Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.
Me neither.

Considering how long a leash Xbox has had from Microsoft execs since the OG Xbox, it looks like they'll dabble with some losses here and there, but it can't get too out of hand.

Never happening now that they have a somewhat hardware-agnostic replacement for console hardware with Xcloud/GamePass.

Hardware has always been a "necessary evil" for MS, which is why they are moving past it in search of MAU revenue.
 

more consoles sold = more people in their ecosystem

so it absolutely still matters.
 
I've got this bad feeling that the GitHub leaks are the true final numbers, and Sony is trying to use the new cooling system to counteract their high clock speed.

Also, have people debunked the idea that Sony was going the hardware route for backwards compatibility, and that this is the reason why the CU count will be 32?


Edit: Those 4chan specs make no sense when the rumor is that the Series X will be more expensive.

GitHub makes absolutely no sense because it would be the most inefficient and costly way to get to 9 TF.

No sense at all.

If the cooling solution is real, it's because they've already maxed out die size and are increasing the clocks simply because they have the headroom.

SlimySnake
 
But that's where the backwards compatibility issue I read about comes into play. It looks like they have to go with 36 CUs to match the PS4 and PS4 Pro for backwards compatibility, and this CU count makes the PS5 around 9 TF. I have no clue why they can't use a higher CU count and just deactivate the ones they don't need during backwards compatibility, but that's what I have read.
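For what it's worth, the ~9 TF figure falls straight out of the CU count and clock. A quick sketch of the arithmetic, assuming the usual GCN/RDNA layout of 64 shader lanes per CU with 2 FP32 ops per cycle (fused multiply-add):

```python
# Peak FP32 throughput from CU count and clock. The 64-lanes-per-CU and
# 2-FLOP-per-cycle figures are the standard GCN/RDNA assumptions.
def tflops(cus, clock_ghz, lanes_per_cu=64, flops_per_cycle=2):
    return cus * lanes_per_cu * flops_per_cycle * clock_ghz / 1000

print(tflops(36, 2.0))    # 36 CUs at 2.0 GHz -> 9.216 TF (the GitHub figure)
print(tflops(36, 0.911))  # PS4 Pro: 36 CUs at 911 MHz -> ~4.2 TF
```

Which is exactly why 36 CUs at roughly 2GHz keeps coming up alongside the ~9.2 TF number.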
 
A game that has six letters and ends with an O. I can only think of Sunset Overdrive, but what he says doesn't fit that, and I think that franchise is dead anyway.
 
But that's where the backwards compatibility issue I read about comes into play. It looks like they have to go with 36 CUs to match the PS4 and PS4 Pro for backwards compatibility, and this CU count makes the PS5 around 9 TF. I have no clue why they can't use a higher CU count and just deactivate the ones they don't need during backwards compatibility, but that's what I have read.

No, they don't. You can simply disable CUs.
 
Having seen 000000 O running, I'll just say this: Games > Flops. The lighting in this game is beyond my ability to explain. Firing a shotgun in a dark room, seeing the muzzle flash light up the room, and the accompanying smoke wafting around, leaving trails that also react to the light and have actual weight. Everything just seems real.

Socom???
 
GitHub makes absolutely no sense because it would be the most inefficient and costly way to get to 9 TF.

No sense at all.

If the cooling solution is real, it's because they've already maxed out die size and are increasing the clocks simply because they have the headroom.

SlimySnake


Well, actually, it kinda makes sense if that's how they're trying to hit 9 TF, when you think about it. Would it be costlier to use 2 extra CUs (38), or to invest in a good cooling system? With 38 CUs there's a bigger chance more dies come out unusable, as opposed to 36 with 4 disabled, so it would be, well, costlier.

The reason for the expensive cooling, I believe, is that they're actually gonna attempt to hit 2GHz, which is insane, and if they pull it off and keep it cool, then they will have set a new benchmark for console cooling.

So yeah, to me, if they've only got 40 CUs to work with, it makes more sense to up the clock than to gamble on what comes out as a usable APU.
 

Why go with 40 CUs and bad yields when you can just increase the CU count?

It's not dramatically more expensive for a 60 CU chip.
 
I don't work for AMD, obviously, so I can't confirm whether a 60 CU chip is much costlier than a 40 CU one. I only say 38 CUs as an example because if it is a 40 CU chip in total, that would still leave 2 inactive, which would obviously yield better than using the full 40; better yet, 36 will yield more than 38. I'm just basing my comment on GitHub; I don't know whether to believe it's been changed or not.

All I can say is I believe the GitHub leak wasn't fake, and that's what my comment was based on.
 
Well, actually, it kinda makes sense if that's how they're trying to hit 9 TF, when you think about it. Would it be costlier to use 2 extra CUs (38), or to invest in a good cooling system? With 38 CUs there's a bigger chance more dies come out unusable, as opposed to 36 with 4 disabled, so it would be, well, costlier.

The reason for the expensive cooling, I believe, is that they're actually gonna attempt to hit 2GHz, which is insane, and if they pull it off and keep it cool, then they will have set a new benchmark for console cooling.

So yeah, to me, if they've only got 40 CUs to work with, it makes more sense to up the clock than to gamble on what comes out as a usable APU.

Actually, this isn't true. The first thing to bear in mind is that increasing clock-speed has a far more deleterious effect on manufacturing yield: aside from physical defects in the fabrication process, chips still need to be validated to ensure they can run at the target clock-speed within a given voltage threshold (those that can't are rejected).

A 2GHz 36CU chip is likely to have terrible yields, especially on a vanilla 7nm process node (we can already see this in the 5700 XT and 5600 XT's clock versus power consumption curves).

Increasing die size is the much safer bet, and you can check the difference an increased die size would make to the APU cost by making a few good educated guesses.

Using the die-per-wafer calculator here, for a 36CU part (40 CUs total with 2 WGPs disabled), a 312 mm² die produces 121 good dies out of 177 per wafer, corresponding to a fab yield of 73.8%.

Increasing the CU count to 52 total (48 CUs enabled) gives a rough die size increase of around 48 mm², giving 104 good dies out of 148; thus a 70.5% yield.

So the reduction in manufacturing yield is only 3.3%. Assuming $11,000 per wafer (Sony and MS will pay less as they're buying in much bigger volumes), the cost breakdown per die is:

36CU/40CU ====> $90.91 per die
48CU/52CU ====> $105.77 per die, so a mere 16% increase in cost to fab the chip.

Meanwhile, the cost of the cooling solution will be drastically reduced AND you will still be able to clock high enough to reach a competitive performance level while maintaining good yields on validated chips.

Note:
The above analysis does not take chip validation into account. In reality there will be a further reduction in yield due to manufacturing variability: some chips won't be able to meet the target clocks within an adequate voltage threshold and will thus be rejected. With the XB1X, MS invented the Hovis Method to get around this, tailoring the power profile of other PCB components to target an overall system power consumption and TDP limit (thus allowing more APU chips to pass validation). I do suspect, however, that this method will not scale to the tens of millions of consoles per year that Sony & MS will want to deliver with their base consoles.
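The cost arithmetic fits in a few lines (a hypothetical sketch; the wafer price and good-die counts are the assumed figures from this post, and `cost_per_good_die` is just a name for the division):

```python
# Per-die cost from good dies per wafer, using the post's assumed numbers:
# $11,000 per 7nm wafer, 121 good dies for the ~312 mm² die and 104 for
# the larger ~360 mm² die.
WAFER_COST_USD = 11_000

def cost_per_good_die(good_dies_per_wafer):
    return WAFER_COST_USD / good_dies_per_wafer

cost_small = cost_per_good_die(121)  # 36CU-enabled / 40CU-total die
cost_large = cost_per_good_die(104)  # 48CU-enabled / 52CU-total die
print(f"${cost_small:.2f} vs ${cost_large:.2f}, "
      f"a {cost_large / cost_small - 1:.0%} increase")
```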
 
Yet the word is they have put the dollars into the cooling system, which can either mean they have the headroom or they are trying to hit their mark.

Yeah, it's the safer bet, but is it what Sony went with? Like I said, I'm just going by the old GitHub leak. More CUs is way safer, I agree; whether they have done that or not is unknown to us. Under the assumption GitHub is 100% accurate and it is indeed 40 total CUs, they aren't gonna use all the CUs, that's a given. And I say the old GitHub leak because we don't know what changes have been made since. The APUs being tested in the leak were tested at 2GHz and seemed to pass the tests that we could see. So we can gather 2GHz is the mark, or perhaps it was for devkits only at that time?

I should say for the record I don't believe they will hit 2GHz; maybe the tests we have seen were just pushing the chips to their limits.
 
Why couldn't it be a combination of factors? Let's say, even with a larger chip and more CUs, what if Sony wants to stay in the same size range as the current PS4 Pro as far as the case goes? With a larger tower-style case like MS is using, you could get better ventilation cooling using a standard heatsink/fan combination. But when putting the same range of equipment into a much smaller case like the PS4 Pro's, you might need more than that to cool it off. So couldn't the cooling system simply be needed due to a smaller form factor and design, and not necessarily be related to a chip being overclocked? Food for thought. Obviously, the answer is "Yes," and this fits in as well with what people have said about Sony being very size-conscious with their products due to the Asian market, etc.
 
A question for everyone here with a gaming PC:

Does anyone plan on buying an XSX, or do you plan on playing Xbox on your PC?

I've decided on PS5 day one and have built a new PC; all of it is new bar the GPU (a 970). I'm waiting to see what Nvidia and AMD release this year. With that in mind, do I wait it out or just get both consoles? AHHHHHHHH
 
Yet the word is they have put the dollars into the cooling system, which can either mean they have the headroom or they are trying to hit their mark.

Yeah, it's the safer bet, but is it what Sony went with? Like I said, I'm just going by the old GitHub leak. More CUs is way safer, I agree; whether they have done that or not is unknown to us. Under the assumption GitHub is 100% accurate and it is indeed 40 total CUs, they aren't gonna use all the CUs, that's a given. And I say the old GitHub leak because we don't know what changes have been made since. The APUs being tested in the leak were tested at 2GHz and seemed to pass the tests that we could see. So we can gather 2GHz is the mark, or perhaps it was for devkits only at that time?

People should really stop equating "expensive cooling solution" == "high clockspeed".

Expensive cooling solution implies high power consumption for the main processor(s). This could be through increasing clockspeed or simply opting for a bigger GPU.

The fact that Sony is reported to have a more elaborate cooling solution for PS5 than any previous console, only points to the suggestion that PS5 is likely to consume significantly more power than any previous console (whatever that implies).

We could be easily looking at a 60CU GPU clocked lower to maximise production yields (typically how it's done in consoles), needing to dissipate something of the order of 150 to 180 watts for the CPU/GPU+RAM alone.

To anyone that's been following closely AMD's GPU micro-architectural technology and semi-conductor process technology advances and the corresponding improvements in performance per watt of fabricated dies, it's become pretty obvious that both MS and Sony would be forced to increase their overall TDP budgets for the next-gen consoles from the 100 to 125 watt limits employed in this gen's consoles; especially if they wanted to achieve a typical generational 8-10x jump in raw GPU performance.

In which case, the reports on PS5's expensive cooling solution and the physical size of the XSX chassis are far less surprising.

I will say that I'm also sceptical of the final PS5 being clocked at 2GHz. I cannot say I understand what the Github data is, considering the lack of context; however, looking at how bizarre the specs for the Oberon GPU are, I would presume the silicon tested may simply be some test platform used to check the BC modes, or perhaps a defective chip with more CUs fused off that has been clocked high to simulate the final chip's overall performance.
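A toy model makes the "power, not clockspeed" point concrete. Dynamic power scales roughly with CUs × frequency × voltage², and voltage must rise with frequency; the `voltage` curve below is a purely illustrative assumption, not AMD data:

```python
# Toy comparison of a narrow-fast vs wide-slow GPU at the same ~9.2 TF.
# The voltage/frequency curve is an illustrative guess, not real silicon data.
def voltage(f_ghz):
    return 0.75 + 0.30 * max(0.0, f_ghz - 1.0)  # assumed V/f curve

def relative_power(cus, f_ghz):
    return cus * f_ghz * voltage(f_ghz) ** 2  # dynamic power ~ N * f * V^2

narrow = relative_power(36, 2.0)  # 36 CUs @ 2.0 GHz ~= 9.2 TF
wide = relative_power(48, 1.5)    # 48 CUs @ 1.5 GHz ~= 9.2 TF
print(f"narrow-fast draws {narrow / wide:.2f}x the power of wide-slow")
```

Under these made-up numbers, the 36 CU part at 2GHz burns roughly a third more power for the same throughput, which is why the cooling bill, rather than the clock per se, is the tell.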
 
Why couldn't it be a combination of factors? Let's say, even with a larger chip and more CUs, what if Sony wants to stay in the same size range as the current PS4 Pro as far as the case goes? With a larger tower-style case like MS is using, you could get better ventilation cooling using a standard heatsink/fan combination. But when putting the same range of equipment into a much smaller case like the PS4 Pro's, you might need more than that to cool it off. So couldn't the cooling system simply be needed due to a smaller form factor and design, and not necessarily be related to a chip being overclocked? Food for thought. Obviously, the answer is "Yes," and this fits in as well with what people have said about Sony being very size-conscious with their products due to the Asian market, etc.


Yeah, exactly right. Another thing we need to keep in mind is indeed the form factor. I just think if they manage 2GHz with whatever CU count they've decided to go with, then the cooling they've built for it is gonna be a beautiful design and will set standards.
 
People should really stop equating "expensive cooling solution" == "high clockspeed".

Expensive cooling solution implies high power consumption for the main processor(s). This could be through increasing clockspeed or simply opting for a bigger GPU.

The fact that Sony is reported to have a more elaborate cooling solution for PS5 than any previous console, only points to the suggestion that PS5 is likely to consume significantly more power than any previous console (whatever that implies).

We could be easily looking at a 60CU GPU clocked lower to maximise production yields (typically how it's done in consoles), needing to dissipate something of the order of 150 to 180 watts for the CPU/GPU+RAM alone.

To anyone that's been following closely AMD's GPU micro-architectural technology and semi-conductor process technology advances and the corresponding improvements in performance per watt of fabricated dies, it's become pretty obvious that both MS and Sony would be forced to increase their overall TDP budgets for the next-gen consoles from the 100 to 125 watt limits employed in this gen's consoles; especially if they wanted to achieve a typical generational 8-10x jump in raw GPU performance.

In which case, the reports on PS5's expensive cooling solution and the physical size of the XSX chassis are far less surprising.

Again, I've only based my response on the GitHub leaks. So that's my reason for the good cooling alongside the 2GHz, because we know they have tested for 2GHz.

I do wonder about MS's cooling though; they've got a really good, simple design. I'm still unsure how they even fit the motherboards in the damn thing.
 
I don't work for AMD, obviously, so I can't confirm whether a 60 CU chip is much costlier than a 40 CU one. I only say 38 CUs as an example because if it is a 40 CU chip in total, that would still leave 2 inactive, which would obviously yield better than using the full 40; better yet, 36 will yield more than 38. I'm just basing my comment on GitHub; I don't know whether to believe it's been changed or not.

All I can say is I believe the GitHub leak wasn't fake, and that's what my comment was based on.

Sony already confirmed they used a low-spec device to run PS4 games in order to test out the SSD speed.

Can they go lower than 36 CUs to test out PS4 games? If the answer is no (and it is, because that matches the Pro's count), then it's obvious what the GitHub information was referencing (providing it's true).
 
Again, I've only based my response on the GitHub leaks. So that's my reason for the good cooling alongside the 2GHz, because we know they have tested for 2GHz.

I do wonder about MS's cooling though; they've got a really good, simple design. I'm still unsure how they even fit the motherboards in the damn thing.

I think the GitHub data lacks enough context to presume anything about it.

It's like finding a scrap of paper on the bus after seeing the UK Prime Minister Boris Johnson alighting, reading the word "China" on it, and jumping to the conclusion that Britain is poised to kick off World War Three as a result of Chinese espionage on British Intelligence.

Regardless of how authentic the info you have is, without the proper context within which to frame it, your speculation based off it is no more reliable than fan-fiction.
 
Sony already confirmed they used a low-spec device to run PS4 games in order to test out the SSD speed.

Can they go lower than 36 CUs to test out PS4 games? If the answer is no (and it is, because that matches the Pro's count), then it's obvious what the GitHub information was referencing (providing it's true).


No, of course not. I believe the Cerny patent only says it can disable new CPU features and lower clock speeds and CUs (standard PS4) for BC. I mean, it wouldn't be a huge shock if the whole GitHub test was indeed purely for BC and nothing else.
 
People should really stop equating "expensive cooling solution" == "high clockspeed".

Expensive cooling solution implies high power consumption for the main processor(s). This could be through increasing clockspeed or simply opting for a bigger GPU.

The fact that Sony is reported to have a more elaborate cooling solution for PS5 than any previous console, only points to the suggestion that PS5 is likely to consume significantly more power than any previous console (whatever that implies).

We could be easily looking at a 60CU GPU clocked lower to maximise production yields (typically how it's done in consoles), needing to dissipate something of the order of 150 to 180 watts for the CPU/GPU+RAM alone.

To anyone that's been following closely AMD's GPU micro-architectural technology and semi-conductor process technology advances and the corresponding improvements in performance per watt of fabricated dies, it's become pretty obvious that both MS and Sony would be forced to increase their overall TDP budgets for the next-gen consoles from the 100 to 125 watt limits employed in this gen's consoles; especially if they wanted to achieve a typical generational 8-10x jump in raw GPU performance.

In which case, the reports on PS5's expensive cooling solution and the physical size of the XSX chassis are far less surprising.

I will say that I'm also sceptical of the final PS5 being clocked at 2GHz. I cannot say I understand what the Github data is, considering the lack of context; however, looking at how bizarre the specs for the Oberon GPU are, I would presume the silicon tested may simply be some test platform used to check the BC modes, or perhaps a defective chip with more CUs fused off that has been clocked high to simulate the final chip's overall performance.

EXACTLY. Form factor is actually what I think is most likely driving this 'cooling solution', if that is accurate. As others have pointed out, clocking the system that high (like 2GHz) is surely POSSIBLE, but really not something you WANT to do for reliability. More likely it's because you're putting a lot of power into a small form factor, as opposed to the large form factor MS seems to be running with. Frankly, I think Sony will end up forced to go to a larger form factor in the future as the tech continues to evolve, but at least for now, that might give them the ability to squeeze a lot of power into a nice, trim little package.
 
I think the GitHub data lacks enough context to presume anything about it.

It's like finding a scrap of paper on the bus after seeing the UK Prime Minister Boris Johnson alighting, reading the word "China" on it, and jumping to the conclusion that Britain is poised to kick off World War Three as a result of Chinese espionage on British Intelligence.

Regardless of how authentic the info you have is, without the proper context within which to frame it, your speculation based off it is no more reliable than fan-fiction.


Indeed, it's the same as believing insiders, but it's all we have to go off, ahaha. The GitHub leak did coincide with Cerny's patent on BC.

Well, I'm off to work to start saving for these no-doubt expensive machines. Have a good one, mate.
 
No, of course not. I believe the Cerny patent only says it can disable new CPU features and lower clock speeds and CUs (standard PS4) for BC. I mean, it wouldn't be a huge shock if the whole GitHub test was indeed purely for BC and nothing else.

I'm just saying, if GitHub were real, then it's a perfect match for the silver tower they used to demo Spider-Man on, which was not a PS5 devkit.
 
Actually, this isn't true. The first thing to bear in mind is that increasing clock-speed has a far more deleterious effect on manufacturing yield: aside from physical defects in the fabrication process, chips still need to be validated to ensure they can run at the target clock-speed within a given voltage threshold (those that can't are rejected).

A 2GHz 36CU chip is likely to have terrible yields, especially on a vanilla 7nm process node (we can already see this in the 5700 XT and 5600 XT's clock versus power consumption curves).

Increasing die size is the much safer bet, and you can check the difference an increased die size would make to the APU cost by making a few good educated guesses.

Using the die-per-wafer calculator here, for a 36CU part (40 CUs total with 2 WGPs disabled), a 312 mm² die produces 121 good dies out of 177 per wafer, corresponding to a fab yield of 73.8%.

Increasing the CU count to 52 total (48 CUs enabled) gives a rough die size increase of around 48 mm², giving 104 good dies out of 148; thus a 70.5% yield.

So the reduction in manufacturing yield is only 3.3%. Assuming $11,000 per wafer (Sony and MS will pay less as they're buying in much bigger volumes), the cost breakdown per die is:

36CU/40CU ====> $90.91 per die
48CU/52CU ====> $105.77 per die, so a mere 16% increase in cost to fab the chip.

Meanwhile, the cost of the cooling solution will be drastically reduced AND you will still be able to clock high enough to reach a competitive performance level while maintaining good yields on validated chips.

Note:
The above analysis does not take chip validation into account. In reality there will be a further reduction in yield due to manufacturing variability: some chips won't be able to meet the target clocks within an adequate voltage threshold and will thus be rejected. With the XB1X, MS invented the Hovis Method to get around this, tailoring the power profile of other PCB components to target an overall system power consumption and TDP limit (thus allowing more APU chips to pass validation). I do suspect, however, that this method will not scale to the tens of millions of consoles per year that Sony & MS will want to deliver with their base consoles.


Very good post.

You didn't take the die space required for ray tracing into account, though. We know from the XSX die shot that it's probably around 400mm². I think you should start your deduction from that point.
 
Well, hex 000000 means black. If that's what OsirisBlack meant.

So Black O.? Black Ops? A new Black game?
Yes I'm quoting myself.
Wait a minute. Black O.... Black Osiris? Osiris Black?!

 
EXACTLY. Form factor is actually what I think is most likely driving this 'cooling solution', if that is accurate. As others have pointed out, clocking the system that high (like 2GHz) is surely POSSIBLE, but really not something you WANT to do for reliability. More likely it's because you're putting a lot of power into a small form factor, as opposed to the large form factor MS seems to be running with. Frankly, I think Sony will end up forced to go to a larger form factor in the future as the tech continues to evolve, but at least for now, that might give them the ability to squeeze a lot of power into a nice, trim little package.

Form factor can have a really big impact on cooling. Absolutely. Consoles tend to have to strike a delicate balance between thermals, acoustics and cost.

A large console chassis can facilitate better air flow (e.g. bigger fans) while maintaining a desirable acoustic profile, allow for increased finned area on the heat sink for improved heat transfer and just plain provide more options for internal layout. On the flip side, a bigger case imposes a greater materials cost, as well as shipping and handling costs for storage and distribution.

A smaller case benefits aesthetics, materials, shipping and handling costs.

For a high TDP console APU design (whether by clock-speed or larger chip, it's immaterial), a larger form factor is the easiest way to go, but the costs will mount up depending on how large you tread.

It's possible to design for a smaller form factor, but it's harder to dissipate the heat in a smaller more cramped chassis, as although you benefit from higher air velocities, which increase heat transfer, the more tortuous air flow path increases resistance to flow, meaning higher load on the fans and a worse acoustic profile. You're also limited on the amount of space you have for the heat sink, so less finned area can render the design even further sub-optimal, meaning you need to increase fan speed and head further in order to overcome the reduction in finned area on the heat sink (and even then there are physical limits you will start pushing against).

If PS5 is targeting a substantially higher TDP than PS4/Pro, then a bigger form factor is almost a necessity; unless you do something truly radical with your heat sink design (like what's shown in the Sony patent).
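Rough numbers for the finned-area point (the heat-transfer coefficient and temperature delta below are illustrative assumptions, not measurements of any console):

```python
# Back-of-envelope forced convection: Q = h * A * dT, so the finned area
# needed is A = Q / (h * dT). h ~ 50 W/(m^2*K) is a rough figure for a
# consumer fan-plus-heatsink; dT is fin temperature above intake air.
def fin_area_m2(tdp_watts, h=50.0, delta_t_k=40.0):
    return tdp_watts / (h * delta_t_k)

print(fin_area_m2(120))  # ~current-gen budget -> 0.06 m^2 of fin
print(fin_area_m2(180))  # higher next-gen budget -> 0.09 m^2 of fin
```

Even in this crude model, a 50% jump in TDP means 50% more fin area (or faster, louder fans), which is exactly the squeeze a small chassis runs into.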

Indeed, it's the same as believing insiders, but it's all we have to go off, ahaha.

I'm afraid I don't believe it is the same.

The insiders tend to speak fairly plainly and qualify their statement with the correct context. So whether you believe them or not, at least you can clearly understand what they are saying about the next-gen consoles.

The Github leak requires far too much assumption and speculation in order to interpret it.

Very good post.

You didn't take the die space required for ray tracing into account, though. We know from the XSX die shot that it's probably around 400mm². I think you should start your deduction from that point.

Actually, my die area estimates are based on Liara Brave's numbers from ResetEra, which take RT area into account. Incidentally, it isn't expected to be much:

a) because the area devoted to RT cores for BVH intersection acceleration in RTX isn't much (Liara Brave increases each CU footprint by the same proportion);
and b) because from AMD's patent on their own projected RT hardware implementation, it appears they're putting the BVH acceleration hardware in the RBEs, while at the same time it appears not to be as comprehensive in design as RTX's implementation. So the prevailing estimate is that it should take less area.

In which case, my die area estimates would be considered somewhat conservative.

In any case, that post was merely intended to be indicative of the relative difference in per-die cost between a larger die with more shader cores and a faster, smaller die.
 
Sony is in a weird spot, imo! They would normally absolutely shout that this is the best place to play! They don't show much confidence these days. It's weird, a complete 180 from last gen.
 
To be fair, they haven't yet officially fully revealed their console, so I wouldn't expect them to say anything until they have.

I would, however, argue that the fact that they went first, revealing the details they did in the two Wired articles, shows confidence.

I think they're just not quite ready to ramp up their marketing plans for PS5. When they are, they will.
 