PS4K information (~2x GPU power w/ clock+, new CPU, price, tent. Q1 2017)

[Image: AMD LiquidVR slide]


[Image: AMD Radeon R9 395X2 dual-GPU Fiji VR card]

No I mean how is Sony going to fit and budget an additional GPU in an APU form? Would it just be a larger board with another processing unit identical to the first running in parallel?
 
No I mean how is Sony going to fit and budget an additional GPU in an APU form? Would it just be a larger board with another processing unit identical to the first running in parallel?

If they are going from 28nm to 14nm & removing redundant parts from the PS4 hardware, they should be able to fit a second chip. It might even be stacked like the Vita chip.
 
If they are going from 28nm to 14nm & removing redundant parts from the PS4 hardware, they should be able to fit a second chip. It might even be stacked like the Vita chip.



The second GPU thing makes no sense except for fan wet dreams.
There's something you don't understand: the multi-GPU VR thing is there for people owning multiple GPUs because there's no chip on the market fast enough for their needs.

In the PS4's case? You can bet the PS4K won't even be on par with a single R9 290. What's the point in getting two of the same GPU when you could fit a bigger 32CU part at a higher clock speed?

You can throw around PR slides, patents and such. They don't translate into anything in real performance. And the fact is, a GPU twice as big will always be better and more reliable than two smaller chips.
 
If they are going from 28nm to 14nm & removing redundant parts from the PS4 hardware, they should be able to fit a second chip. It might even be stacked like the Vita chip.
What kind of "redundant" parts?

Also, stacking a non-mobile chip (80-100 watts) on top of another one? Are you trying to pull a misterxmedia or what?
 
The second GPU thing makes no sense except for fan wet dreams.
There's something you don't understand: the multi-GPU VR thing is there for people owning multiple GPUs because there's no chip on the market fast enough for their needs.

In the PS4's case? You can bet the PS4K won't even be on par with a single R9 290. What's the point in getting two of the same GPU when you could fit a bigger 32CU part at a higher clock speed?

You can throw around PR slides, patents and such. They don't translate into anything in real performance. And the fact is, a GPU twice as big will always be better and more reliable than two smaller chips.
Simple really: it maximizes compatibility with existing PS4 games while being a non-factor for devs who do not wish to customize their game for 2 different hardware profiles.
I'd say the ones with a "wet dream" here are the ones thinking that the PS4K will be a big break in power and adopt a single much stronger GPU. In fact, if the PS4K goes as far as adopting Polaris, I'd be rather astonished.

The sooner people realize the PS4K is probably meant strictly for 4K TV users and might provide little to no benefit to 1080p users aside from a few optimized games, the better, so as to avoid disappointment (I can already see the threads around E3: "WTF Sony, what's the point in releasing a PS4K if I can't play all my PS4 games in 1080p60!!!")
 
Multi-GPU could be the future. Die stacking in general, though? Definitely.

For people wondering why multi anything, start reading this AMD paper from page 36:
http://www.microarch.org/micro46/files/keynote1.pdf



Simple really: it maximizes compatibility with existing PS4 games while being a non-factor for devs who do not wish to customize their game for 2 different hardware profiles.

Yeah that's what I was saying yesterday. Plus I'm pretty sure there's manufacturing benefits if done right.
 
No I mean how is Sony going to fit and budget an additional GPU in an APU form? Would it just be a larger board with another processing unit identical to the first running in parallel?
Even if they manage to fit it, why would they? If I recall correctly, Valve's latest benchmarks have shown that rendering left and right eyes on separate GPUs only results in roughly a 35% improvement vs. rendering both sides on one GPU.

There's still a lot of redundant work that both GPUs need to perform in order to generate the final frame that can't be magically taken out of the equation.
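A toy model of why the gain is so modest (the work fractions below are made-up numbers chosen to reproduce the ~35% figure, purely for illustration):

```python
# Shared per-frame work that both GPUs must redo caps the multi-GPU speedup.
shared = 0.65       # assumed fraction of frame time both GPUs duplicate (culling, shadows, etc.)
per_eye = 0.35      # assumed cost of rendering one eye's view

one_gpu = shared + 2 * per_eye     # single GPU renders both eyes
two_gpus = shared + per_eye        # each GPU renders one eye, but redoes the shared work

print(f"speedup: {one_gpu / two_gpus:.2f}x")   # 1.35x with these made-up numbers
```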

Dual GPUs in consoles don't make any sense. Two GPUs would have a lot of redundant parts, the memory pool (three processors accessing the same memory pool at once) would be a nightmare, etc. But it doesn't surprise me to see who came up with the idea.
 
Even if they manage to fit it, why would they? If I recall correctly, Valve's latest benchmarks have shown that rendering left and right eyes on separate GPUs only results in roughly a 35% improvement vs. rendering both sides on one GPU.

There's still a lot of redundant work that both GPUs need to perform in order to generate the final frame that can't be magically taken out of the equation.

Dual GPUs in consoles don't make any sense. Two GPUs would have a lot of redundant parts, the memory pool (three processors accessing the same memory pool at once) would be a nightmare, etc. But it doesn't surprise me to see who came up with the idea.
This.

Usually it's more efficient to have one GPU with double the number of units than it is to have two GPUs and have to orchestrate them separately.
 
Simple really: it maximizes compatibility with existing PS4 games while being a non-factor for devs who do not wish to customize their game for 2 different hardware profiles.
I'd say the ones with a "wet dream" here are the ones thinking that the PS4K will be a big break in power and adopt a single much stronger GPU. In fact, if the PS4K goes as far as adopting Polaris, I'd be rather astonished.

The sooner people realize the PS4K is probably meant strictly for 4K TV users and might provide little to no benefit to 1080p users aside from a few optimized games, the better, so as to avoid disappointment (I can already see the threads around E3: "WTF Sony, what's the point in releasing a PS4K if I can't play all my PS4 games in 1080p60!!!")



It's not because you say "simple" that it becomes simple. The GCN architecture is pretty brilliant with scalability. It's easier to have twice as many compute units and disable them for compatibility purposes than the opposite, even though it would be stupid to disable anything. You develop for an OS, an API. That makes things easier and lets games run flawlessly on future hardware.

As for customizing for two profiles, this is nonsense. You develop for the lowest common denominator and you let the muscle of the better hardware do the rest.
 
All this 2nd GPU stuff is crazy talk. Sony has a single-screen 1080p PSVR coming out this Xmas and some are already talking about two screens and wireless.

Practicality and logic out of the window.

The PS4.5 needs to address the PS4's weaknesses: CPU and RAM bandwidth / access speeds. Sony has been very logical and practical of late... I cannot see them going crazy again.

I would not be surprised to see a beefed-up CPU and GDDR5X; the GPU could be up to doubled, but it's not the weak point at all IMO.
 
Upscaling is already incorporated into most 4k tvs. It would be pointless to have the hardware do what the tv already can.
Xtensa is an accelerator for a lot of general multimedia applications; of course one is going to be included in the next iterations of the consoles, as they have to handle all the different quality levels of media that will need it anyway. That said, if the system is already outputting native 1080p or 4K there is no point in wasting the extra .5 watt or so to upscale that when there is a better processor for that task in the TV.
Actually it being 'pointless' is really not true at all. There are reasons for a developer and system manufacturer to want the scaling done internally. Particularly when taking them in total. Internal scaling offers the following:


Potential improvements to lag: We are still in the early days of 4K sets. This is particularly true when considering true UHD sets (support for 10-bit+ color, HDR, and wider color gamuts). The scaling speed of these sets can vary quite a bit. Some offer gaming modes, some do not, etc., but for the most part the CEs are going to weigh image quality - not rendering speed - as the higher priority in their infancy. They want their TV to stand out in a showroom, and I can't imagine optimizing lag is anywhere near the top of the list of concerns for most of them. Not at this juncture at least.

Typically though their processing pipelines will do the least - and therefore display the fastest - when given a 4K signal. This is really no different than when we transitioned to 'True HD' sets back in the day. The PS3 was chastised both in terms of image quality and lag by being reliant on the display to do its scaling.



Potential improvements to picture quality - or at least consistency: This is similar to the above conceptually. If the console does the scaling you are guaranteeing a base level of quality. And arguably more importantly, you are guaranteeing consistency. The same algorithms will be used all the time. It is not left up to the CE to figure out the 'best' way to scale the image.

Moreover this also offers the potential to fine tune things specifically for gaming. The final pipelines of the Xbox 360 (Avivo engine inside of Xenos) handled its scaling but also offered other processing to the image. Granted many would (rightfully) argue that this is a bad example given MS went nuts with artificial edge enhancement, sharpening, etc ... but the point stands. In the hands of better designers at least, they can pick and choose the best ways to improve the image as far as post-processing for gaming. A TV on the other hand? No.



Avoidance of the dreaded double scaling: This one is a biggie in terms of pure image quality. There is a simple rule when it comes to image processing - keep everything at a higher quality level in your intermediate steps in order to avoid round off, error accumulations, etc. And a very obvious situation where this issue sees the light of day is double scaling.

Consumer displays have a fixed set of resolutions they will accept (480i/p, 720p, 1080i/p, and 2160p). Think of this simple example ... an original PS4 title that has not been patched to directly support native 4K output. It is internally rendering at 900p and then scaled by the GPU to 1080p for output. As far as the developer's original intentions go, that is the final expected output. A given frame has been decimated for Rec 709 output. Now the TV is going to take that finished frame and rescale it to 2160p. The intermediate higher bit-depth image isn't there ... you're scaling something that's already been 'finalized'*. Then on top of this you are now scaling the image using a totally different algorithm. So not only are you losing information in between, you're also using multiple scaling algorithms, the latter of which the developer / console maker has neither control over nor knowledge of.

Take another example. Let's say the developer's PS4K version is targeting a native resolution higher than 1080p. Obviously it doesn't have the horsepower to do native 4K, so it will be something in between. Again the display does not know how to handle this non-standard resolution. If the PS4K is not capable of scaling the final output to 2160p, it would instead have to downsample to 1080p, and then the display will scale the 1080p image to 2160p. Now all the same caveats above (and a few more) also apply.
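To make the first example above concrete, here is a minimal sketch of the two paths (assuming Pillow and NumPy, and a hypothetical 900p capture named frame_900p.png; the "TV" filter is just a stand-in, since the real one is unknown):

```python
# Compare single scaling (console goes straight to 2160p) against double
# scaling (console finalizes an 8-bit 1080p frame, then a TV-style scaler
# with a different algorithm takes it to 2160p).
import numpy as np
from PIL import Image

src = Image.open("frame_900p.png").convert("RGB")    # hypothetical 1600x900 capture

# Path 1: one known, high-quality resample to 2160p.
direct = src.resize((3840, 2160), Image.LANCZOS)

# Path 2: finalize at 1080p, then let a different (unknown) algorithm rescale.
intermediate = src.resize((1920, 1080), Image.LANCZOS)
tv_scaled = intermediate.resize((3840, 2160), Image.NEAREST)

# The difference between the two outputs is the error you pick up by scaling
# an already-finalized frame a second time with a mismatched filter.
diff = np.abs(np.asarray(direct, dtype=np.int16) - np.asarray(tv_scaled, dtype=np.int16))
print("mean per-channel error:", diff.mean())
```

Swapping the second filter for any other resampler changes the numbers but not the conclusion: the intermediate, already-finalized 1080p frame has thrown information away before the second pass ever runs.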







* A PS4 dev would have to answer whether this can be avoided if you set deep color. It's possible you could at least maintain some or all of the intermediate step. Whether that can be automated though is another story. And it still doesn't avoid the issue of using a different algorithm for the secondary scaling.
 
The PS4 is at 40 million in a bit over 2 years. Sony doesn't give a damn about the NX or Xbox right now; this is part of a bigger deal that includes 4K TV streaming, 4K movie streaming, 4K Blu-rays, 4K TVs & so on.

So much this, lol. The world is much larger than the warz, people. The end game has always been greater.
 
Jeff is telling you about the same thing I told people about before, but no one listens. You can even see that he credited me in some of the posts.

Stop. Don't even begin to think you are redeemed by Jeff's post. He is not agreeing with you just because he referenced that slide while talking specifically about upscaling.

First off, your posts and constant quoting of patents and presentation slides are nowhere near the quality of Jeff's deliberate, researched, and thoroughly explained posts.

Second, the Xtensa processor Jeff is talking about is intended to accelerate multi-media applications such as voice recognition, image processing (like upscaling), and audio processing.

What little GPU processing the Xtensa might be capable of is likely delegated to the PS4's OS and themes, which is a separate environment. Again, you are more than an order of magnitude off (closer to two this time).

Possibly, but TVs have power restrictions that a game console or computer doesn't have, so they (game consoles and computers) can do more processing. This is why Kaveri does it rather than leaving it to the TV. Edit: Kaveri has an HDMI port, in case you tried to argue that Kaveri is usually outputting to a monitor with no video processing. 4K TVs are the best monitor.

AMD has stated that the same hardware used for HEVC codecs in the XB1 is used in AMD UVD, and that is the Xtensa accelerator. It can do much, much more, and AMD has had them in APUs since 2010 along with the TrustZone processor, as Xtensa processors/DPUs need an AXI ARM bus. ALL AMD APUs can support gesture recognition using the Xtensa accelerator. Starting with Kaveri (UVD 4.2) it can support HEVC, and Carrizo (UVD 6) can support HEVC with a duty cycle allowing it to turn off part of the time while processing an HEVC video stream.

Entirely agreed, but many 4K TVs have quad-, hexa-, and some even octa-core processors entirely dedicated to image quality improvements, along with other image processing chipsets that blow Xtensa out of the water. In general the next revision of Xtensa that would go into Sony's next system would likely still be dedicated to the OS.

Xtensa has a small portion dedicated to image processing, and is itself a very small portion of the APU. It simply doesn't have the processing power to handle the trillions of floating point operations per second required by the matrix operations used for rasterizing a modern game at 4K, as onQ is suggesting.
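For a sense of scale, a back-of-the-envelope estimate (the per-pixel shading cost below is purely an assumed figure, not a measurement):

```python
# Rough arithmetic only: why 4K rasterization lands in TFLOP/s territory.
pixels_4k = 3840 * 2160        # ~8.3 million pixels per frame
fps = 60
flops_per_pixel = 5_000        # assumed average cost of a modern shading pipeline

required = pixels_4k * fps * flops_per_pixel
print(f"~{required / 1e12:.1f} TFLOP/s")   # ~2.5 TFLOP/s with these guesses
```

That is GPU-class throughput, far beyond what a small DSP/accelerator block is built for.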

I agree with you: modern engines are quite likely not to make the same usage of their video memory as the PS2 ones made of the GS's eDRAM, so this technique is likely over-engineered for that. In terms of the scaler chip being built in or not / enabled or not for 1080p-to-4K scaling, it is true that Sony has great TVs to sell, but if I cared a lot about my device's video quality (maybe part of my unique selling points) I would prefer to do the scaling on-chip and send a reliably great signal to the TV instead of trusting an unknown chip in an unknown TV.

As an ECE graduate, I sympathise with lovers of both the EE and CS fields :). Researched posts are appreciated and the intention is good.

Good point, but if a TV's post-processing is going to ruin the image, it's still going to ruin the image unless the user takes the time to turn off some of those settings. The fact we are talking about a potential high-end game console further increases the likelihood the user would have a quality 4K TV. The chip is still going to be included regardless to maintain compatibility with the original PS4.

I think this whole shrunk APU + new GPU (or another APU?) solution is very unlikely, but I take more issue that people are pointing to an accelerator as a potential source of rendering capabilities for modern games. There is a huge discontinuity in knowledge on the subject of rasterization for people who can delve so deeply into speculation on the chipset used to upscale the image immediately afterward.

Actually it being 'pointless' is really not true at all. There are reasons for a developer and system manufacturer to want the scaling done internally. Particularly when taking them in total. Internal scaling offers the following:

I was directly referencing only the use of native resolutions so a lot of that is a wash. Like I said earlier some variation of Xtensa will be included to handle content of varying quality and compatibility anyway.

Anyway, the PS4 should be able to detect the TV and know whether the TV's processor is a better fit for the job or not. I don't expect a TV to ever handle non-native resolution scaling. Good points on the subject anyway.
 
Stop. Don't even begin to think you are redeemed by Jeff's post. He is not agreeing with you just because he referenced that slide while talking specifically about upscaling.

First off, your posts and constant quoting of patents and presentation slides are nowhere near the quality of Jeff's deliberate, researched, and thoroughly explained posts.

Second, the Xtensa processor Jeff is talking about is intended to accelerate multi-media applications such as voice recognition, image processing (like upscaling), and audio processing.

What little GPU processing the Xtensa might be capable of is likely delegated to the PS4's OS and themes, which is a separate environment. Again, you are more than an order of magnitude off (closer to two this time).



Entirely agreed, but many 4K TVs have quad-, hexa-, and some even octa-core processors entirely dedicated to image quality improvements, along with other image processing chipsets that blow Xtensa out of the water. In general the next revision of Xtensa that would go into Sony's next system would likely still be dedicated to the OS.

Xtensa has a small portion dedicated to image processing, and is itself a very small portion of the APU. It simply doesn't have the processing power to handle the trillions of floating point operations per second required by the matrix operations used for rasterizing a modern game at 4K, as onQ is suggesting.



Good point, but if a TV's post-processing is going to ruin the image, it's still going to ruin the image unless the user takes the time to turn off some of those settings. The fact we are talking about a potential high-end game console further increases the likelihood the user would have a quality 4K TV. The chip is still going to be included regardless to maintain compatibility with the original PS4.

I think this whole shrunk APU + new GPU (or another APU?) solution is very unlikely, but I take more issue that people are pointing to an accelerator as a potential source of rendering capabilities for modern games. There is a huge discontinuity in knowledge on the subject of rasterization for people who can delve so deeply into speculation on the chipset used to upscale the image immediately afterward.


I don't need any redemption
 
No I mean how is Sony going to fit and budget an additional GPU in an APU form? Would it just be a larger board with another processing unit identical to the first running in parallel?
The assumed reason a supposed PS4K is even viable is that AMD will have the capability of producing its new APU design using a smaller node size.


Historically, when a smaller node size is available the manufacturers move to it in order to shrink the console and lower the price (or generate higher margins). Once all the details have been hammered out and yields are at good levels, there's a roughly linear relationship between the physical size of the chip and its manufacturing cost. In other words a processor that does the exact same thing but is using a smaller node costs less to produce since it's physically smaller. The added benefit is it draws less power, which allows a redesign of the console beyond simply having a smaller main board. The cooling and layout can change. As an aside, I would expect a revised standard PS4 to eventually hit taking advantage of this. It's in Sony's best interest.


In the case of the PS4K though (and hats off to Jeff Rigsby for prognosticating this years ago), Sony is instead taking advantage of the smaller node size in the opposite direction. Instead of shrinking the original and lowering costs, why not make an APU that's roughly the same size and offers similar thermal / power characteristics and manufacturing costs to the original PS4? With the smaller node that would mean you have a much higher transistor budget than the original. In other words, a more powerful APU than the original.


The above general concept works whether you are talking about strictly increasing the number of cores and clock speeds, or going with a dual-GPU design. It's all about the number of transistors you can fit into a given die size.
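As a rough illustration of that trade-off (the density gain, CU count, and clock below are assumptions for the sake of arithmetic, not leaked specs):

```python
# Back-of-the-envelope: a similar die area on a denser node buys more CUs and/or clock.
# GCN: 64 shaders per CU, 2 FLOPs per shader per clock (FMA).
def gcn_gflops(cus, clock_mhz):
    return cus * 64 * 2 * clock_mhz / 1000

ps4 = gcn_gflops(18, 800)                 # original PS4: ~1843 GFLOPs (public figure)

density_gain = 2.0                        # assumed effective 28nm -> 14/16nm density gain
new_cus = int(18 * density_gain)          # ~36 CUs in roughly the same die area
upgraded = gcn_gflops(new_cus, 900)       # plus a modest clock bump

print(ps4, upgraded, upgraded / ps4)      # ~1843 vs ~4147 GFLOPs, about 2.25x
```

Which lands in the neighborhood of the "~2x GPU power w/ clock+" figure in the thread title.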
 
Ignoring the BS being thrown around, Panajev gave a pretty clean explanation of what it actually is:



That said, modern game engines are designed so that the pipeline shouldn't have an issue with a change in rendering resolution, thus it seems it would be an unnecessary cost to create the frame in this manner.



Xtensa is an accelerator for a lot of general multimedia applications; of course one is going to be included in the next iterations of the consoles, as they have to handle all the different quality levels of media that will need it anyway. That said, if the system is already outputting native 1080p or 4K there is no point in wasting the extra .5 watt or so to upscale that when there is a better processor for that task in the TV.

For all it matters that portion of Xtensa might as well shut off under the circumstances we were discussing. Not saying it's irrelevant, just that it wasn't pertinent to the discussion.

Edit: BTW, I appreciate the extensive write up as a fellow EE with a lot of years to catch up on.
if I'm understanding what you're trying to say...the way YOU are describing uprendering is literally just a buzzword for something that's been going on in PC gaming since, well..forever...it's literally just rendering the game at a higher resolution...that's just rendering to me lol...

That is where the disconnect is, if I'm understanding your stance...applying it to the PS4k is completely irrelevant as it won't have the horsepower to do this...

That Sony patent is also seemingly something very different, using the 4 previous 1080p frames to create a new 4k frame...




Oh really?
yeah, really...you've been talking about so many different things, that you don't even come close to comprehending, that you don't even know what you're arguing anymore...

That Sony patent you keep throwing around isn't doing anything like the Gran Turismo process...in GT each PS3 is rendering a corner of the screen, and then they are just displayed together...it's not taking 4 successive frames and trying to create a new one at a higher resolution...
 
if I'm understanding what you're trying to say...the way YOU are describing uprendering is literally just a buzzword for something that's been going on in PC gaming since, well..forever...it's literally just rendering the game at a higher resolution...that's just rendering to me lol...

That is where the disconnect is, if I'm understanding your stance...applying it to the PS4k is completely irrelevant as it won't have the horsepower to do this...

That Sony patent is also seemingly something very different, using the 4 1080p frames to create a new 4k frame...




yeah, really...you've been talking about so many different things you don't even know what you're arguing anymore...

That Sony patent you keep throwing around isn't doing anything like the Gran Turismo process...in GT each PS3 is rendering a corner of the screen, and then they are just displayed together...it's not taking 4 successive frames and trying to create a new one at a higher resolution...

I know what I was talking about & I have explained a few times that the patent isn't the only way to do up-rendering.
 
I know what I was talking about & I have explained a few times that the patent isn't the only way to do up-rendering.

That Sony patent is upscaling, pure and simple...they can call it whatever they want...but it's an upscaled image that only contains 1920x1080 worth of legit pixels...the rest are created through the upscaling process
 
So what this really means is this is in fact the ps5 with backwards compatibility with Ps4 GAMES and being released barely 3 years after ... Sony FAIL

Sticking with pc I think, only game I played on Ps4 was bloodborne...
 
I'm not sure if this has been discussed (it probably has), mainly because I've been avoiding this thread for the most part, but in theory couldn't a more powerful PS4 make it easier to emulate PS3 games? And if Sony decided to work on that as a feature of the PS4k, would that make it any more enticing to any of you?

Or would it just piss some of you off even more?
 
No it's not, it's uprendering

I like the general idea of an upgraded PS4, I even think it's possible they want to go with an added GPU, but the guy's right.

Think of it as a sheet, say 1x1 m in surface area. You can stretch it and skew it any way you want, but it simply won't cover an area of 2x2 meters without being torn, without holes being created in it.

Edit: Holes definitely lower the sheet's quality.
 
So what this really means is this is in fact the ps5 with backwards compatibility with Ps4 GAMES and being released barely 3 years after ... Sony FAIL

Sticking with pc I think, only game I played on Ps4 was bloodborne...

Not even CLOSE...this thing will be nowhere near the performance upgrade you would expect from a generational change..

We're talking about performance increases to maybe push indie titles to native 4k...smooth out performance on dodgy AAA titles, or maybe push a few more/higher quality effects work in AAA titles with already solid performance...

We are not talking about anything even resembling a generational leap if the rumors are to be believed


I'm not sure if this has been discussed (it probably has), mainly because I've been avoiding this thread for the most part, but in theory couldn't a more powerful PS4 make it easier to emulate PS3 games? And if Sony decided to work on that as a feature of the PS4k, would that make it any more enticing to any of you?

Or would it just piss some of you off even more?

Easier? Yes...but I'm not sure we're talking about enough horsepower...probably not even close
 
Not even CLOSE...this thing will be nowhere near the performance upgrade you would expect from a generational change..

We're talking about performance increases to maybe push indie titles to native 4k...smooth out performance on dodgy AAA titles, or maybe push a few more/higher quality effects work in AAA titles with already solid performance...

We are not talking about anything even resembling a generational leap if the rumors are to be believed




Easier? Yes...but I'm not sure we're talking about enough horsepower...probably not even close
I don't know if we will see a leap like that again in the console space. Jumps like that are just too costly and very risky. Just look at how much the PS3 cost Sony.
 
That Sony patent is upscaling, pure and simple...they can call it whatever they want...but it's an upscaled image that only contains 1920x1080 worth of legit pixels...the rest are created through the upscaling process

You are experiencing a disconnect in topics because of how onQ is expressing them. His suggestions on hardware show he doesn't know the difference between a processor for 2D image scaling and a GPU, which handles all the 3D matrix computations.

As you noted in response to my post earlier, up-rendering is just a different and more expensive way to render at a given resolution. The computational demand is more akin to rendering natively than upscaling.

They are not using the previously displayed frames to upscale, but rendering the four corners of each pixel of the lower resolution image in separate passes, then displaying only the four merged frames as the higher resolution image.
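If I'm reading that right, a toy sketch of the interleaving (a stand-in scene function plays the role of the renderer; a real engine or emulator would jitter its projection by the sub-pixel offset instead):

```python
# Render four low-res passes at sub-pixel offsets, then interleave them into
# one frame with twice the resolution in each dimension. Every output pixel
# is genuinely rendered; nothing is interpolated from neighbours.
import numpy as np

LOW_W, LOW_H = 8, 4                        # tiny stand-ins for 1920x1080

def render_pass(off_x, off_y):
    """Pretend renderer: sample a continuous 'scene' at pixel centers shifted
    by a sub-pixel offset."""
    ys, xs = np.mgrid[0:LOW_H, 0:LOW_W].astype(float)
    return np.sin(xs + off_x) * np.cos(ys + off_y)

passes = {
    (0, 0): render_pass(0.0, 0.0),         # top-left corner of each pixel
    (1, 0): render_pass(0.5, 0.0),         # top-right
    (0, 1): render_pass(0.0, 0.5),         # bottom-left
    (1, 1): render_pass(0.5, 0.5),         # bottom-right
}

high = np.zeros((LOW_H * 2, LOW_W * 2))
for (dx, dy), frame in passes.items():
    high[dy::2, dx::2] = frame             # interleave into the 2x-resolution frame

print(high.shape)                          # (8, 16)
```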
 
I don't know if we will see a leap like that again in the console space. Jumps like that are just too costly and very risky. Just look at how much the PS3 cost Sony.

Well, if they move to the "iPhone" model, then you're right...we won't see leaps that big again...but 2x GPU power is not enough for a PS5...not even close...

The PS3 cost Sony for a few reasons...

- it was actually an absolute STEAL at its launch price, when you consider what stand alone BD players were selling for at the time, but they def overestimated how much people were willing to spend

- they insisted on exotic hardware which made the console overly complicated to develop for and third party games suffered greatly for it

- they were cocky as shit and thought name alone would sell consoles at any price

You are experiencing a disconnect in topics because of how onQ is expressing them. His suggestions on hardware show he doesn't know the difference between a processor for 2D image scaling and a GPU, which handles all the 3D matrix computations.

As you noted in response to my post earlier, up-rendering is just a different and more expensive way to render at a given resolution. The computational demand is more akin to rendering natively than upscaling.

They are not using the previously displayed frames to upscale, but rendering the four corners of each pixel of the lower resolution image in separate passes, then displaying only the four merged frames as the higher resolution image.

In that scenario, why even bother doing it, if it's more expensive computationally?
 
You are experiencing a disconnect in topics because of how onQ is expressing them. His suggestions on hardware show he doesn't know the difference between a processor for 2D image scaling and a GPU, which handles all the 3D matrix computations.

As you noted in response to my post earlier, up-rendering is just a different and more expensive way to render at a given resolution. The computational demand is more akin to rendering natively than upscaling.

They are not using the previously displayed frames to upscale, but rendering the four corners of each pixel of the lower resolution image in separate passes, then displaying only the four merged frames as the higher resolution image.

No he has a disconnect because he has it made up in his head that it's up-scaling so he is going to twist it to being up-scaling no matter what. And I know exactly what a GPU is.
 
I'm sure this has been talked about already, so forgive me. But I just realized what the PS4K could mean for VR. Wouldn't it ostensibly bring it up to par in terms of graphical prowess with the Oculus and Rift?


So Sony releases the PSVR this year for $400, then next year they release the PS4K that brings the power up to par with Oculus, and they've essentially stolen the market. It's cheaper, but still the same. Whereas without the PS4K, the PSVR would be cheaper and not as capable.

This is interesting if my thinking is correct.
 
I don't think what Sony is doing is bad at all. They will still sell millions of units; the PS4 has already made a big impact in gaming, and this just improves games for those who care about that sort of thing. Sure, if you have a gaming PC this still doesn't really matter because you have already spent thousands on your computer. Those who don't even have a PS4 have a good choice now. Those who already own a PS4 can trade theirs in for cheap, of course, and upgrade if it matters to them. For me, I'd probably just buy an extra PS4 next year, but they will eventually get my money, that's for sure.
 
In that scenario, why even bother doing it, if it's more expensive computationally?

It was put in use for the PS4's PS2 emulation because it would jump around some of the PS2's more exotic post processing that would likely cause rendering errors in the pipeline or final output if the buffer size was simply increased. It likely has no use with modern hardware and game engines.
 
In that scenario, why even bother doing it, if it's more expensive computationally?
I'd imagine it only makes sense for specific situations.

For example, let's say you have a working emulator. Modifying it to render at a higher native resolution throughout the entire pipeline could be quite the non-trivial task. An easier development strategy in terms of time, cost, and compatibility could be the above uprendering method.

It's computationally expensive, but it's a means to an end for getting 4x resolution out of what's essentially an already working black box emulator.
 
It was put in use for the PS4's PS2 emulation because it would jump around some of the PS2's more exotic post processing that would likely cause rendering errors in the pipeline or final output if the buffer size was simply increased. It likely has no use with modern hardware and game engines.

I'd imagine it only makes sense for specific situations.

For example, let's say you have a working emulator. Modifying it to render at a higher native resolution throughout the entire pipeline could be quite the non-trivial task. An easier development strategy in terms of time, cost, and compatibility could be the above uprendering method.

It's computationally expensive, but it's a means to an end for getting 4x resolution out of what's essentially an already working black box emulator.

Right, does make sense in that case
 
No he has a disconnect because he has it made up in his head that it's up-scaling so he is going to twist it to being up-scaling no matter what. And I know exactly what a GPU is.

The problem is you have referenced a GPU that is more than an order of magnitude underpowered, and the Xtensa processor, which is an accelerator, as chips you expect can render 3/4 of a 4K image. You somehow think rendering a game at 4K, or the equally difficult "uprendering" at 4K, is possible on those chips, when in reality they'd be lucky to handle a few cubes on screen at that resolution, let alone an entire modern rendering pipeline.

You think you're on base, but your explanations are in left field.
 
Can anyone explain to me why Sony can't release a simple, more powerful SKU with a similar architecture but double the GPU/CPU raw power, without the need to emulate PS4 multiplats? Because I don't understand.
 
Can anyone explain to me why Sony can't release a simple, more powerful SKU with a similar architecture but double the GPU/CPU raw power, without the need to emulate PS4 multiplats? Because I don't understand.

They can...some people here in this thread are just bat shit crazy
 
Not even CLOSE...this thing will be nowhere near the performance upgrade you would expect from a generational change..

We're talking about performance increases to maybe push indie titles to native 4k...smooth out performance on dodgy AAA titles, or maybe push a few more/higher quality effects work in AAA titles with already solid performance...

We are not talking about anything even resembling a generational leap if the rumors are to be believed




Easier? Yes...but I'm not sure we're talking about enough horsepower...probably not even close

Fine, if that's the case do not release anything. This is not going to end well for Sony.

I don't even think they will manage 4K, to be honest, when even beefy PCs struggle!
 
The problem is you have referenced a GPU that is more than an order of magnitude underpowered, and the Xtensa processor, which is an accelerator, as chips you expect can render 3/4 of a 4K image. You somehow think rendering a game at 4K, or the equally difficult "uprendering" at 4K, is possible on those chips, when in reality they'd be lucky to handle a few cubes on screen at that resolution, let alone an entire modern rendering pipeline.

You think you're on base, but your explanations are in left field.


The Xtensa processor is configurable & it can be made to be used as a GPU for STBs & consoles if that's what someone wanted to use it for. And me explaining that it could be a smaller GPU does not = me not knowing what a GPU is. It means I'm smart enough to know that Sony isn't going to have a power-hungry 6TF GPU in a PS4.5. Whatever it will be, it's going to be a smart, lower-power solution.
 
Fine, if that's the case do not release anything. This is not going to end well for Sony.

I don't even think they will manage 4K, to be honest, when even beefy PCs struggle!

They won't, some people have lost their minds with the 4k idea...

This console will not be able to render AAA games in 4k...just won't happen...will it upscale to 4k? Yes, I think without a doubt...it will include UHD media capabilities and everything that comes with that...

The only games you'll get in native 4K are smaller indie titles...that's it...

AAA games will see slight improvements in effects or performance over the OG PS4...

As for it ending well for Sony...its going to be fine...it makes sense for Sony...the PS4 will still sell boat loads, and still be the main console...those that want to spend the extra for a little more capable box will...
 
Fine, if that's the case do not release anything. This is not going to end well for Sony.

I don't even think they will manage 4K, to be honest, when even beefy PCs struggle!
No one (sane) is claiming native 4k rendering. As to why to release in general, there are a few reasons.


1) There are the realities of current processor timelines. Moore's law is proving to not be a law, particularly within the confines of producing a console that has size and thermal limitations. CPU's and GPU's have been seeing slower incremental upgrades. So if a console manufacturer wants to show the huge sorts of performance increases generations have typically demonstrated, the length between generations needs to increase. The problem with that of course is that PC's and phones do not follow the same generational paradigm. So while their improvements may be incremental, you still reach a point in the middle of a console generation (if not earlier) where there is a pretty sizable gulf between what they can do and what the current console generation can do.



2) The console market is pretty risky. We are past the days where a console manufacturer is willing to swallow huge losses at the beginning of a generation with the hopes of recouping it tied to its platform being successful. With that in mind they are limiting performance at the start of the gen even more, making the above eventual gulf happen even quicker.

Since both of those are tied to the existing PC / phone paradigm, the question becomes why not join them? Obviously it needs to be done in a very deliberate and systematic way. You don't want to just fully open it up or you lose the point of consoles. But if things stay the way they have in the past, they are obviously going to see long-term issues since they aren't being agile to the realities of the changing processor tech landscape.



3) That is the general situation that basically all of the console manufacturers are facing moving forward, and many would argue that in itself is enough to justify at least trying to change paradigms. In this specific case though, there is also the justification of what's happening with display tech and over-the-top services. We are in the midst of big change in content and display with UHD. Resolution isn't really the long-pole here, it's HDR, 10+ bit, and wider color gamuts that is making this a sea change. When we moved from SD to HD, all that really changed was the resolution. Everything was still targeting Rec 709. The reason we saw improvements in color and the like wasn't because the standard had changed, it was because display technology and content distribution finally had caught up to what the existing standard could do. Now we're moving past that. The unfortunate reality of all of this is that DRM also comes with these new standards. You want to watch UHD content from VUDU, Netflix, and Amazon? You need HDCP 2.2.

Since last gen, consoles have been valued as being general media players by many people. So when you consider the move to UHD displays and content, we are at a point where it behooves the console manufacturers to have support for 4K media ... and they need it well before a true next generation will hit. So it comes down to a revision of some sort being necessary no matter what. They want 4K media playback with all the bells and whistles (and in the case of Sony at least, that includes Ultra HD Blu Ray too), as well as game output that plays nicely with the TVs that are going to be available.



Basically since a redesign is a given because of #3 anyway, why not also throw in some more processing because of #1 and #2 and see how it goes? If ever there was a time to test out a paradigm shift, the current technology landscape is arguably the most logical.
 
The Xtensa processor is configurable & it can be made to be used as a GPU for STBs & consoles if that's what someone wanted to use it for. And me explaining that it could be a smaller GPU does not = me not knowing what a GPU is. It means I'm smart enough to know that Sony isn't going to have a power-hungry 6TF GPU in a PS4.5. Whatever it will be, it's going to be a smart, lower-power solution.

There is no possible configuration of Xtensa that could come remotely close to what you are suggesting.

Look at their webpage: http://ip.cadence.com/applications/applications-overview

"Cadence standards-based IP can be easily configured and customized to speed your design cycle. Our Tensilica® DPUs can be optimized in many ways for both performance-intensive DSP (audio, video, imaging, and baseband signal processing) and embedded RISC processing functions (security, networking, and deeply embedded control)."

They don't even suggest graphics because that is not what they were designed for. They have roughly 30 whitepapers and none of them discuss graphics rendering:

http://ip.cadence.com/knowledgecenter/resources/know-dip-wp

You can't get 4k native rendering out of a system without the GPU power to do it. Period.
 
Kaveri uses the same generation of Xtensa processors for UVD that the XB1 and PS4 use for their vision processing and codecs (see the section on UVD in Kaveri). Regardless of semantics or any understanding or misunderstanding of rendering or upscaling, it is accomplished with an Xtensa accelerator.

Enough already all of you.

Geez, I try to educate you guys on what's in the game consoles and NO one listens. This will be implemented by both consoles and is a reason that AMD, the XB1, and the PS4 are all using the same family of accelerator.

AMD themselves have said that the PS4 doesn't use the same DSP blocks.
Besides there is zero evidence that there's anything related to that in the southbridge. It's 100% a Marvell chip.
 
No one (sane) is claiming native 4k rendering. As to why to release in general, there are a few reasons.


1) There are the realities of current processor timelines. Moore's law is proving to not be a law, particularly within the confines of producing a console that has size and thermal limitations. CPU's and GPU's have been seeing slower incremental upgrades. So if a console manufacturer wants to show the huge sorts of performance increases generations have typically demonstrated, the length between generations needs to increase. The problem with that of course is that PC's and phones do not follow the same generational paradigm. So while their improvements may be incremental, you still reach a point in the middle of a console generation (if not earlier) where there is a pretty sizable gulf between what they can do and what the current console generation can do.



2) The console market is pretty risky. We are past the days where a console manufacturer is willing to swallow huge losses at the beginning of a generation with the hopes of recouping it tied to its platform being successful. With that in mind they are limiting performance at the start of the gen even more, making the above eventual gulf happen even quicker.

Since both of those are tied to the existing PC / phone paradigm, the question becomes why not join them? Obviously it needs to be done in a very deliberate and systematic way. You don't want to just fully open it up or you lose the point of consoles. But if things stay the way they have in the past, they are obviously going to see long-term issues since they aren't being agile to the realities of the changing processor tech landscape.



3) That is the general situation that basically all of the console manufacturers are facing moving forward, and many would argue that in itself is enough to justify at least trying to change paradigms. In this specific case though, there is also the justification of what's happening with display tech and over-the-top services. We are in the midst of big change in content and display with UHD. Resolution isn't really the long-pole here, it's HDR, 10+ bit, and wider color gamuts that is making this a sea change. When we moved from SD to HD, all that really changed was the resolution. Everything was still targeting Rec 709. The reason we saw improvements in color and the like wasn't because the standard had changed, it was because display technology and content distribution finally had caught up to what the existing standard could do. Now we're moving past that. The unfortunate reality of all of this is that DRM also comes with these new standards. You want to watch UHD content from VUDU, Netflix, and Amazon? You need HDCP 2.2.

Since last gen, consoles have been valued as being general media players by many people. So when you consider the above, we are at a point where it behooves the console manufacturers to have support for all of the above ... and they need it well before a true next generation will hit. So it comes down to a revision of some sort being necessary no matter what. They want 4K media playback with all the bells and whistles (and in the case of Sony at least, that includes Ultra HD Blu Ray too), as well as game output that plays nicely with the TVs that are going to be available.



Basically since a redesign is a given because of #3 anyway, why not also throw in some more processing because of #1 and #2 and see how it goes? If ever there was a time to test out a paradigm shift, the current technology landscape is arguably the most logical.

Sorry to quote it all again, but this is a fantastic post. Kinda all makes sense when presented like this.
 
It's not because you say "simple" that it becomes simple. The GCN architecture is pretty brilliant with scalability. It's easier to have twice as many compute units and disable them for compatibility purposes than the opposite, even though it would be stupid to disable anything. You develop for an OS, an API. That makes things easier and lets games run flawlessly on future hardware.

As for customizing for two profiles, this is nonsense. You develop for the lowest common denominator and you let the muscle of the better hardware do the rest.
Know what is nonsense?
Thinking that you'll increase power arbitrarily on a platform that was not originally advertised as such to devs (or we would have heard about it), and expect all games to run fine without being patched (particularly games with unlocked framerates, and honestly we shouldn't expect the vast majority of games to be patched). Maybe I'll eat some crow when the reveal comes, but man do I expect people like you to have a hissy fit.

Edit: btw, with Zoetis (who is an insider afaik) saying "sli", you realize that the most logical explanation is a doubling of the current GPU (or a smaller, more efficient version of it) rather than a single more powerful one, yes?

Anyways, we'll see soon enough, perhaps at E3.
 