IMHO Microsoft should not even make another console. Maybe partner up with Google instead and focus on streaming.
This opinion is wrong. I personally like Microsoft hardware. Streaming is going to suck and everyone knows it.
"If it's proprietary, it won't be upgradable"
It could be though. Vita style...
From what i read online other components would crap out before the SSD dies.
"It could be though. Vita style..."
You can't replace something that's fused to the board.
"This opinion is wrong. I personally like Microsoft hardware. Streaming is going to suck and everyone knows it."
I don't like streaming either, and sure, I like the Surface Pro, I think it's a great overpriced device to compete with the overpriced Apple hardware. As far as gaming consoles go, I don't know if Microsoft delivered with the Xbox One. I mean sure, the Xbox One X has great specs, but it's just sitting there in my room and I only have Halo for it. I would have much rather paid a streaming subscription of 15$ a month just to play Halo and that's it.
Soldering on an SSD just seems like a terrible idea regardless of the benefits.
"It probably interfaces with any plug and play SSD or drive. I would not panic at all in this department, it would just be pointless FUD."
I hope for this outcome too, but it wouldn't be unthinkable for it to be fused to the board, and they could always offer upgradable cold storage.
"I would have much rather paid a streaming subscription of 15$ a month just to play Halo and that's it."
But streaming sucks... might as well buy the cheaper xboneS and play there.
"I do not think they are going to toss out external storage solutions which people use to play games as well."
What if they are used for cold storage only?
"1TB soldered, with the option for a regular HDD as backup. Sure you might have to wait five minutes while a game is transferred from HDD to NVMe SSD, but so what? They need to make sure the UI is super clear though."
Plus this way they guarantee speed across the board.
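For a rough sense of that transfer wait, here's a back-of-the-envelope sketch; the game size and HDD speed below are illustrative assumptions, not known specs:

# Rough estimate of moving a game from an external HDD to the internal SSD.
# Game size and HDD read speed are assumptions for illustration only.
game_size_gb = 50          # assumed install size
hdd_read_mb_s = 160        # assumed sequential read speed of an external HDD
seconds = game_size_gb * 1024 / hdd_read_mb_s
print(f"~{seconds / 60:.1f} minutes")  # ~5 minutes for 50 GB at 160 MB/s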
"But streaming sucks... might as well buy the cheaper xboneS and play there."
Streaming does suck, but paying 200$ for the cheaper model is still way too expensive to play Halo. Everything else I can play on my PS4 or PC.
"Explain?"
Yup, use the external HDD and user-replaceable drive as cold storage; in order to play a game it must be installed on the internal SSD.
Edit: nvm, we are on the same page I think, since my last edit.
"Streaming does suck, but paying 200$ for the cheaper model is still way too expensive to play Halo. Everything else I can play on my PS4 or PC."
Is it worth it though? Input lag ruins the experience.
Is it worth it? Ok, let's put it this way, you wanna fuck an escort with fake tits and the bitch charges 3,000$ an hour. Is it worth the 3,000$ just for the fake tits? You could find a chick that looks as good as her but without the fake tits, or a street whore that charges 250$ an hour but doesn't have fake tits. You could just use that 3,000$ to trick a hoe and you get multiple fucks out of it versus just an hour fuck. A financially smarter person would say Nah, it's not worth it, I'ma just go with the cheaper option or trick a hoe and pay for her implants.
Isn't Halo coming to PC anyways?
lol i don't see a point in paying for condom (input lag) sex, it's a downgraded experience, rather just play with myself or get a gf (console)
A post I've found online about a Sony patent for an SSD:
This will be one for people interested in some potentially more technical speculation. I posted it in the next-gen speculation thread, but was encouraged to spin it off into its own thread.
I did some patent diving to see if I could dig up any likely candidates for what Sony's SSD solution might be.
I found several Japanese SIE patents from Saito Hideyuki along with a single (combined?) US application that appear to be relevant.
The patents were filed across 2015 and 2016.
Caveat: This is an illustrative embodiment in a patent application. i.e. Maybe parts of it will make it into a product, maybe all of it, maybe none of it. Approach it speculatively.
That said, it perhaps gives an idea of what Sony has been researching. And does seem in line with what Cerny talked about in terms of customisations across the stack to optimise performance.
http://www.freepatentsonline.com/y2017/0097897.html
There's quite a lot going on, but to try and break it down:
It talks about the limitations of simply using an SSD 'as is' in a games system, and a set of hardware and software stack changes to improve performance.
Basically, 'as is', an OS uses a virtual file system, designed to virtualise a host of different I/O devices with different characteristics. Various tasks of this file system typically run on the CPU - e.g. traversing file metadata, data tamper checks, data decryption, data decompression. This processing, and interruptions on the CPU, can become a bottleneck to data transfer rates from an SSD, particularly in certain contexts e.g. opening a large number of small files.
At a lower level, SSDs typically employ a data block size aimed at generic use. They distribute blocks of data around the NAND memory to distribute wear. In order to find a file, the memory controller in the SSD has to translate a request to the physical addresses of the data blocks using a look-up table. In a regular SSD, the typical data block size might require a look-up table 1GB in size for a 1TB SSD. An SSD might typically use DRAM to cache that lookup table - so the memory controller consults DRAM before being able to retrieve the data. The patent describes this as another potential bottleneck.
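To make those table sizes concrete, here's a quick sketch of the arithmetic; the 4-byte entry size and the block sizes are my own assumptions chosen to reproduce the 1GB and 32KB figures, not values taken from the patent:

# Why mapping granularity drives the size of the logical-to-physical lookup table.
# Entry size (4 bytes) and block sizes are assumptions for illustration only.
def lookup_table_size(capacity_bytes, block_size_bytes, entry_bytes=4):
    return (capacity_bytes // block_size_bytes) * entry_bytes

TB = 1024 ** 4
# Fine-grained mapping (typical consumer SSD, ~4KB blocks): ~1GB table -> needs DRAM
print(lookup_table_size(1 * TB, 4 * 1024) / 1024 ** 3, "GB")
# Coarse mapping for write-once install data (assumed 128MB blocks): ~32KB -> fits in SRAM
print(lookup_table_size(1 * TB, 128 * 1024 ** 2) / 1024, "KB")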
Here are the hardware changes the patent proposes vs a 'typical' SSD system:
- SRAM instead of DRAM inside the SSD for lower latency and higher throughput access between the flash memory controller and the address lookup data. The patent proposes using a coarser granularity of data access for data that is written once, and not re-written - e.g. game install data. This larger block size can allow for address lookup tables as small as 32KB, instead of 1GB. Data read by the memory controller can also be buffered in SRAM for ECC checks instead of DRAM (because of changes made further up the stack, described later). The patent also notes that by ditching DRAM, reduced complexity and cost may be possible, and cost will scale better with larger SSDs that would otherwise need e.g. 2GB of DRAM for 2TB of storage, and so on.
- The SSD's read unit is 'expanded and unified' for efficient read operations.
- A secondary CPU, a DMAC, and a hardware accelerator for decoding, tamper checking and decompression.
- The main CPU, the secondary CPU, the system memory controller and the IO bus are connected by a coherent bus. The patent notes that the secondary CPU can be different in instruction set etc. from the main CPU, as long as they use the same page size and are connected by a coherent bus.
- The hardware accelerator and the IO controller are connected to the IO bus.
An illustrative diagram of the system is included in the patent (not reproduced here).
At a software level, the system adds a new file system, the 'File Archive API', designed primarily for write-once data like game installs. Unlike a more generic virtual file system, it's optimised for NAND data access. It sits at the interface between the application and the NAND drivers, and the hardware accelerator drivers.
The secondary CPU arbitrates priority of access to the SSD. When read requests are made through the File Archive API, all other read and write requests can be prohibited to maximise read throughput.
When a read request is made by the main CPU, it sends it to the secondary CPU, which splits the request into a larger number of small data accesses. It does this for two reasons - to maximise parallel use of the NAND devices and channels (the 'expanded read unit'), and to make blocks small enough to be buffered and checked inside the SSD SRAM. The metadata the secondary CPU needs to traverse is much simpler (and thus faster to process) than under a typical virtual file system.
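A toy sketch of that splitting step; the channel count, access-unit size and names are assumptions for illustration, not details from the patent:

# Toy model of the secondary CPU splitting one large read request into small
# accesses striped across NAND channels. All constants here are assumed.
NUM_CHANNELS = 8
ACCESS_UNIT = 64 * 1024  # small enough to buffer and ECC-check in the SSD's SRAM

def split_read(offset, length):
    """Return (channel, offset, size) tuples for one File Archive API read."""
    accesses = []
    pos = offset
    while pos < offset + length:
        size = min(ACCESS_UNIT, offset + length - pos)
        channel = (pos // ACCESS_UNIT) % NUM_CHANNELS  # simple round-robin striping
        accesses.append((channel, pos, size))
        pos += size
    return accesses

# A single 1 MiB request becomes 16 small accesses spread across all 8 channels.
print(len(split_read(0, 1024 * 1024)))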
The NAND memory controller can be flexible about what granularity of data it uses - for data requests sent through the File Archive API, it uses granularities that allow the address lookup table to be stored entirely in SRAM for minimal bottlenecking. Other granularities can be used for data that needs to be rewritten more often - user save data for example. In these cases, the SRAM partially caches the lookup tables.
When the SSD has checked its retrieved data, it's sent from SSD SRAM to kernel memory in the system RAM. The hardware accelerator then uses a DMAC to read that data, do its processing, and then write it back to user memory in system RAM. The coordination of this happens with signals between the components, and not involving the main CPU. The main CPU is then finally signalled when data is ready, but is uninvolved until that point.
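As a sketch of that hand-off order, the function and component names below are placeholders I've invented to mirror the description, not anything named in the patent:

# Pseudocode-style sketch of the read path: the main CPU is only signalled at the end.
# Every object and method name is an invented placeholder; only the ordering
# follows the description above.
def serve_read(request, secondary_cpu, ssd, accelerator, main_cpu):
    for access in secondary_cpu.split(request):      # small accesses, simple metadata
        ssd.read_into_sram(access)                   # buffered + checked in SSD SRAM
        ssd.dma_to_kernel_memory(access)             # SRAM -> kernel area of system RAM
    data = accelerator.process_via_dmac(request)     # decrypt / tamper-check / decompress
    accelerator.dma_to_user_memory(data)             # write results to user memory
    main_cpu.signal_data_ready(request)              # main CPU involved only now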
A diagram illustrating the data flow is included in the patent (not reproduced here).
Interestingly, for a patent, it describes in some detail the processing targets required of these various components in order to meet certain data transfer rates - what you would need in terms of timings from each of the secondary CPU, the memory controller and the hardware accelerator in order for them not to be a bottleneck on the NAND data speeds:
Though I wouldn't read too much into this, in most examples it talks about what you would need to support an end-to-end transfer rate of 10GB/s.
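For a feel of what a 10GB/s end-to-end target implies, here's my own arithmetic with an assumed access-unit size (the patent's actual unit sizes and timings may differ):

# Back-of-the-envelope per-unit time budget implied by a 10 GB/s target.
# The 128 KiB access-unit size is an assumption for illustration.
target_bytes_per_s = 10 * 10**9
unit_bytes = 128 * 1024
units_per_s = target_bytes_per_s / unit_bytes   # ~76,000 units per second
budget_us = 1_000_000 / units_per_s             # ~13 microseconds per unit
print(f"{units_per_s:,.0f} units/s, ~{budget_us:.1f} us per unit")
# Each pipelined stage (secondary CPU, memory controller, hardware accelerator)
# has to keep its per-unit work under roughly that budget to avoid being the bottleneck.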
The patent is also silent on what exactly the IO bus would be - that would obviously be a key bottleneck itself on transfer rates out of the NAND devices. Until we know what that is, it's hard to know what the upper end on the transfer rates could be, but it seems a host of customisations are possible to try to maximise whatever that bus will support.
Once again, this is just one described embodiment, not necessarily exactly what the PS5 solution will look like. But it gives an idea of what Sony's been researching in how to customise an SSD and software stack for faster read throughput for installed game data. TL;DR:
- some hardware changes vs the typical inside the SSD (SRAM for housekeeping and data buffering instead of DRAM)
- some extra hardware and accelerators in the system for handling file IO tasks independent of the main CPU
- at the OS layer, a second file system customized for these changes
all primarily aimed at higher read performance and removing potential bottlenecks for data that is written less often than it is read, like data installed from a game disc or download.
"Soldered SSD is even worse than a proprietary SSD lol"
But what if it is supa dupa fast?
"What I wonder is if they'll have to block external hard drives; if games are to be remade to take full advantage of an SSD's architecture then they'd perform even worse on HDDs than just the increase in size for next gen, which could push load times into unacceptable territory for them. And what about external SSDs?"
I assume they would treat external drives and any other form of user-replaceable drives as cold storage.
"Depends on what the benefits are."
The "benefits" being that you can never replace the SSD (soldered) or you pay twice the price to replace it (proprietary).
"If it's soldered to the board does it really matter? They could always offer user-replaceable cold storage."
The SSDs I have in my MacBook Pro are blazing fast. It makes such a huge difference.
From the looks of it, it's not the SSD that's customized but the board, with added hardware (memory controller, secondary processor, SRAM, HW accelerator), so maybe it could be user-upgradeable fast NVMe.
The idea I got is that by using SRAM the cache size could be decreased considerably; doesn't that contradict the rumor?
"I don't like streaming either, and sure I like the Surface Pro, I think it's a great overpriced device to compete with the overpriced Apple hardware. As far as gaming consoles go, I don't know if Microsoft delivered with the Xbox One, I mean sure the Xbox One X has great specs but it's just sitting there in my room and I only have Halo for it. I would have much rather paid a streaming subscription of 15$ a month just to play Halo and that's it."
If you have a PC then consider your wish granted. But just because YOU don't play it doesn't mean that 50 million others don't. Microsoft came out on the wrong foot this gen and the previous gen was much better for them. But just because you don't do as great as your competitor selling consoles doesn't mean you didn't make money selling games. You don't just exit the console market when you have over $10 billion in revenue in gaming alone.
"Maybe both are true, the dev kit version has double the DRAM/TB to replicate what the final version with half the RAM, but faster SRAM, will have."
Maybe, that being said 16GB would be so disappointing.
I hope some of the more hardcore PC review sites dig into this SSD, could be really interesting. Wonder if the Phison version for PCs will be the same.
"16GB? Was talking about this part, the SSD cache:
3 Samsung K4AAG085WB-MCRC, 2 of those close to the NAND acting as DRAM cache (unusual 2GB DRAM per 1 TB NAND)
Maybe the prototype is using double the DRAM to match the latency SRAM would provide with less of it"
I thought you were at first, then got confused lol.
I'm a bit confused here... SSDs already use DRAM for cache (about 1GB per TB of SSD) and that is what the patent talks about... the patent changes that 1GB of DRAM to a much smaller amount of SRAM, and that way the cache becomes much faster and smaller.
Yes... that's what he is saying.
"The dev kit version has double the DRAM/TB to replicate what the final version with half the DRAM, but faster SRAM, will have."
But wouldn't the final version have zero DRAM?
"(ignoring the patent) .. there's no way they're going to use SRAM for a SSD cache - SRAM is so fast the place to put it would be some part of main memory (or render buffer ...)
.. the SRAM would read data faster than any GDDR6/HBM2 could be written to .."
Read the patent, its use is justified.
That is why they are using SRAM.
"I said RAM, not sure how that D snuck in there"
loooool
2x DRAM prototype -> 1X SRAM final
Soldered SSD is even worse than a proprietary SSD lol
SSD should be a standard replaceable PC part... if not, Sony will charge you the same way they did with the infamous Vita cards lol
"If you can't manage 1 TB of space for games you deserve to pay the price."
Manage? 1TB is enough for what? 20 games on PS4... 10 on PS5?
So Housemarque and Sony are in good relations again. Good for them.
"Got to build that next voxel-like Resogun showpiece for the PS5 launch!"
Amen.