No, I mean: how is Sony going to fit and budget an additional GPU in an APU form? Would it just be a larger board with another processing unit identical to the first, running in parallel?
YLODv2 would not be a good idea... there's a reason they went the APU route.
If they are going from 28nm to 14nm & removing redundant parts from the PS4 hardware, they should be able to fit a second chip. It might even be stacked like the Vita chip.
What kind of "redundant" parts?
Simple really: it maximizes compatibility with existing PS4 games while being a non-factor for devs who do not wish to customize their game for two different hardware profiles.
The second GPU thing makes no sense except for fan wet dreams.
There's something you don't understand: the multi-GPU VR thing exists for people owning multiple GPUs because there's no single chip on the market fast enough for their needs.
In the PS4's case? You can bet the PS4K won't even be on par with a single R9 290. What's the point of two of the same GPU when you could fit a bigger 32 CU part at a higher clock speed?
You can throw around PR slides, patents and such; it doesn't translate into anything in real performance. And the fact is, a GPU twice as big will always be better and more reliable than two smaller chips.
Even if they manage to fit it, why would they? If I recall correctly, Valve's latest benchmarks have shown that rendering the left and right eyes on separate GPUs only results in about a 35% gain vs rendering both sides on one GPU.
This.
There's still a lot of redundant work that both GPUs need to perform in order to generate the final frame that can't be magically taken out of the equation.
Dual GPUs in consoles don't make any sense. Two GPUs would have a lot of redundant parts, the memory pool would be a nightmare (three processors accessing the same memory pool at once), etc. But it doesn't surprise me to see who came up with the idea.
I'd say the ones with a "wet dream" here are the ones thinking that the PS4K will be a big break in power and adopt a single, much stronger GPU. In fact, if the PS4K goes as far as adopting Polaris I'd be rather astonished.
The sooner people realize the PS4K is probably meant strictly for 4K TV users and might provide little to no benefit to 1080p users aside from a few optimized games, the better, to avoid disappointment. (I can already see the threads around E3: "WTF Sony, what's the point in releasing a PS4K if I can't play all my PS4 games in 1080p60!!!")
Upscaling is already incorporated into most 4K TVs. It would be pointless to have the hardware do what the TV already can.
Actually, it being 'pointless' is really not true at all. There are reasons for a developer and system manufacturer to want the scaling done internally, particularly when taking them in total. Internal scaling offers the following:
The PS4 is at 40 million in 2+ years; Sony doesn't give a damn about the NX or Xbox right now. This is part of a bigger deal that includes 4K TV streaming, 4K movie streaming, 4K Blu-rays, 4K TVs & so on.
Jeff is telling you the same thing I told people about before, but no one listens; you can even see that he credits me in some of the posts.
Possibly, but TVs have power restrictions that the game console or computer doesn't have, so they (game consoles and computers) can do more processing. This is why Kaveri does it rather than leaving it to the TV. Edit: Kaveri has an HDMI port, in case you tried to argue that Kaveri is usually outputting to a monitor with no video processing. 4K TVs are the best monitors.
AMD has stated that the same hardware used for HEVC codecs in the XB1 is used in AMD UVD, and that is the Xtensa accelerator. It can do much, much more, and AMD has had them in APUs since 2010 along with the TrustZone processor, as Xtensa processors/DPUs need an AXI ARM bus. ALL AMD APUs can support gesture recognition using the Xtensa accelerator. Starting with Kaveri (UVD 4.2) it can support HEVC, and Carrizo (UVD 6) can support HEVC with a duty cycle allowing it to turn off part of the time while processing an HEVC video stream.
I agree with you; modern engines are quite likely not to make the same usage of their video memory as the PS2 ones made of the GS's eDRAM, so this technique is likely over-engineered for that. As for the scaler chip being built in or not (and enabled or not) for 1080p-to-4K scaling, it is true that Sony has great TVs to sell, but if I cared a lot about my device's video quality (maybe part of my unique selling points) I would prefer to do the scaling on chip and send a reliably great signal to the TV instead of trusting an unknown chip in an unknown TV.
As an ECE graduate, I sympathise with lovers of both the EE and CS fields. Researched posts are appreciated and the intention is good.
Stop. Don't even begin to think you are redeemed by Jeff's post. He is not agreeing with you just because he referenced that slide while talking specifically about upscaling.
First off, your posts and constant quoting of patents and presentation slides are nowhere near the quality of Jeff's deliberate, researched, and thoroughly explained posts.
Second, the Xtensa processor Jeff is talking about is intended to accelerate multi-media applications such as voice recognition, image processing (like upscaling), and audio processing.
What little GPU processing the Xtensa might be capable of is likely delegated to the PS4's OS and themes, which is a separate environment. Again, you are more than an order of magnitude off (closer to two this time).
Entirely agreed, but many 4K TVs have quad-, hexa-, and some even octa-core processors entirely dedicated to image-quality improvements, along with other image-processing chipsets that blow Xtensa out of the water. In general, the next revision of Xtensa that would go into Sony's next system would likely still be dedicated to the OS.
Xtensa has a small portion dedicated to image processing, and is itself a very small portion of the APU. It simply doesn't have the processing power to handle the trillions of floating-point operations per second required by the matrix operations used for rasterizing a modern game at 4K, as onQ is suggesting.
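For rough context on the scale involved, here's a quick back-of-envelope sketch; the ops-per-pixel figure is an assumed illustrative number, not a measurement of any real engine:

```python
# Back-of-envelope estimate of 4K rendering cost. ops_per_pixel is an
# assumed illustrative figure, not a measured value from any real game.
pixels_4k = 3840 * 2160      # ~8.3 million pixels per frame
fps = 60                     # target frame rate
ops_per_pixel = 5000         # assumed shading/geometry ALU ops per pixel
flops = pixels_4k * fps * ops_per_pixel
print(f"~{flops / 1e12:.1f} TFLOPS sustained")  # ~2.5 TFLOPS: GPU-class work
```

Even with that conservative assumption the answer lands in the trillions of operations per second, which is the territory of a full GPU, not a small multimedia accelerator.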
Good point, but if a TV's post-processing is going to ruin the image, it's still going to ruin the image unless the user takes the time to turn off some of those settings. The fact we are talking about a potential high-end game console further increases the likelihood the user would have a quality 4K TV. The chip is still going to be included regardless, to maintain compatibility with the original PS4.
I think this whole shrunk-APU-plus-new-GPU (or another APU?) solution is very unlikely, but I take more issue with people pointing to an accelerator as a potential source of rendering capability for modern games. There is a huge discontinuity in knowledge on the subject of rasterization in people who can delve so deeply into speculation on the chipset used to upscale the image immediately afterward.
The assumed reason a supposed PS4K is even viable is that AMD will have the capability of producing its new APU design using a smaller node size.
Ignoring the BS being thrown around, Panajev gave a pretty clean explanation of what it actually is:
That said, modern game engines are designed so that the pipeline shouldn't have an issue with a change in rendering resolution, thus it seems it would be an unnecessary cost to create the frame in this manner.
Xtensa is an accelerator for a lot of general multimedia applications; of course one is going to be included in the next iterations of the consoles, as they have to handle all the different quality levels of media that will need it anyway. That said, if the system is already outputting native 1080p or 4K, there is no point in wasting the extra 0.5 W or so to upscale when there is a better processor for that task in the TV.
For all it matters that portion of Xtensa might as well shut off under the circumstances we were discussing. Not saying it's irrelevant, just that it wasn't pertinent to the discussion.
Edit: BTW, I appreciate the extensive write-up, as a fellow EE with a lot of years to catch up on.
yeah, really...you've been talking about so many different things that you don't even come close to comprehending, that you don't even know what you're arguing anymore...
if I'm understanding what you're trying to say...the way YOU are describing uprendering is literally just a buzzword for something that's been going on in PC gaming since, well...forever...it's literally just rendering the game at a higher resolution...that's just rendering to me lol...
That is where the disconnect is, if I'm understanding your stance...applying it to the PS4K is completely irrelevant, as it won't have the horsepower to do this...
That Sony patent is also seemingly something very different, using the 4 1080p frames to create a new 4k frame...
yeah, really...you've been talking about so many different things you don't even know what you're arguing anymore...
That Sony patent you keep throwing around isn't doing anything like the Gran Turismo process...in GT each PS3 is rendering a corner of the screen, and then they are just displayed together...it's not taking 4 successive frames and trying to create a new one at a higher resolution...
I know what I was talking about, & I have explained a few times that the patent isn't the only way to do up-rendering.
That Sony patent is upscaling, pure and simple...they can call it whatever they want...but it's an upscaled image that only contains 1920x1080 worth of legit pixels...the rest are created through the upscaling process
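To make that distinction concrete, here is a minimal sketch, assuming a simple bilinear filter (real TV and console scalers use fancier kernels), of what a 1080p-to-4K upscale does: every new pixel is interpolated from existing ones, so no new scene information is created.

```python
import numpy as np

def bilinear_upscale_2x(img):
    """Double an H x W x 3 image by interpolating between existing pixels."""
    h, w, _ = img.shape
    ys = np.linspace(0, h - 1, h * 2)   # output sample positions in source space
    xs = np.linspace(0, w - 1, w * 2)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]       # vertical blend weights
    wx = (xs - x0)[None, :, None]       # horizontal blend weights
    img = img.astype(np.float32)
    # Each output pixel is a weighted average of its four source neighbors.
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return ((1 - wy) * top + wy * bot).astype(np.uint8)
```

Running this on a 1920x1080 frame yields 3840x2160 output, but every added pixel is a weighted average of the original samples, which is the point about only 1920x1080 worth of "legit" pixels.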
No it's not, it's uprendering.
So what this really means is that this is in fact the PS5, with backwards compatibility with PS4 GAMES, being released barely 3 years after ... Sony FAIL
Sticking with PC I think; the only game I played on PS4 was Bloodborne...
I'm not sure if this has been discussed (it probably has), mainly because I've been avoiding this thread for the most part, but in theory couldn't a more powerful PS4 make it easier to emulate PS3 games? And if Sony decided to work on that as a feature of the PS4k, would that make it any more enticing to any of you?
Or would it just piss some of you off even more?
Not even CLOSE...this thing will be nowhere near the performance upgrade you would expect from a generational change...
We're talking about performance increases to maybe push indie titles to native 4k...smooth out performance on dodgy AAA titles, or maybe push a few more/higher quality effects work in AAA titles with already solid performance...
We are not talking about anything even resembling a generational leap if the rumors are to be believed
Easier? Yes...but I'm not sure we're talking about enough horsepower...probably not even close
I don't know if we will see a leap like that again in the console space. Jumps like that are just too costly and very risky. Just look how much the PS3 cost Sony.
You are experiencing a disconnect in topics because of how onQ is expressing them. His suggestions on hardware show he doesn't know the difference between a processor for 2D image scaling and a GPU, which handles all the 3D matrix computations.
As you noted in response to my post earlier, up-rendering is just a different and more expensive way to render at a resolution. The computational demand is more akin to rendering natively than upscaling.
They are not using previously displayed frames to upscale, but rendering the four corners of each pixel of the lower-resolution image in separate passes, then displaying the four merged frames as the higher-resolution image.
In that scenario, why even bother doing it, if it's more expensive computationally?
It was put to use for the PS4's PS2 emulation because it sidesteps some of the PS2's more exotic post-processing that would likely cause rendering errors in the pipeline or final output if the buffer size were simply increased. It likely has no use with modern hardware and game engines.
I'd imagine it only makes sense for specific situations.
For example, let's say you have a working emulator. Modifying it to render at a higher native resolution throughout the entire pipeline could be quite the non-trivial task. An easier development strategy in terms of time, cost, and compatibility could be the above uprendering method.
It's computationally expensive, but it's a means to an end for getting 4x resolution out of what's essentially an already working black box emulator.
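As a minimal sketch of that black-box strategy, assuming the emulator exposes some way to re-render a frame with a sub-pixel offset (render_with_offset and its half-pixel jitter are hypothetical stand-ins, not the PS4 emulator's actual interface):

```python
import numpy as np

def render_with_offset(dx, dy, w=640, h=480):
    """Hypothetical emulator hook: render one frame with the sample position
    jittered by (dx, dy), measured in source-pixel units."""
    return np.zeros((h, w, 3), dtype=np.uint8)  # stand-in for the real output

def uprender_2x():
    # Four passes, one per corner of each source pixel (half-pixel offsets).
    passes = {(ox, oy): render_with_offset(ox * 0.5, oy * 0.5)
              for oy in (0, 1) for ox in (0, 1)}
    h, w, _ = passes[(0, 0)].shape
    out = np.empty((h * 2, w * 2, 3), dtype=np.uint8)
    # Interleave: pass (ox, oy) supplies that corner of every 2x2 output block.
    for (ox, oy), frame in passes.items():
        out[oy::2, ox::2] = frame
    return out  # double-resolution frame built from four native-res renders
```

Every pixel of the merged output is a genuinely rendered sample, which is why the cost is roughly four native-resolution renders rather than one render plus a cheap interpolation filter.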
No, he has a disconnect because he has it made up in his head that it's upscaling, so he is going to twist it into being upscaling no matter what. And I know exactly what a GPU is.
Can anyone explain to me why Sony can't release a simple, more powerful SKU with a similar architecture but with double the GPU/CPU raw power, without the need to emulate the PS4 multiplats? Because I don't understand.
The problem is you have referenced a GPU that is more than an order of magnitude underpowered, and the Xtensa processor, which is an accelerator, as chips you expect can render 3/4 of a 4K image, because you somehow think rendering a game at 4K, or the equally difficult "uprendering" to 4K, is possible on those chips, when in reality they'd be lucky to handle a few cubes on screen at that resolution, let alone an entire modern rendering pipeline.
You think you're on base, but your explanations are in left field.
Fine, if that's the case do not release anything. This is not going to end well for Sony.
I don't even think they will manage 4K, to be honest, when even beefy PCs struggle!
The Xtensa processor is configurable & it can be made to be used as a GPU for STBs & consoles if that's what someone wanted to use it for. And me explaining that it could be a smaller GPU does not = me not knowing what a GPU is. It means I'm smart enough to know that Sony isn't going to have a power-hungry 6TF GPU in a PS4.5; whatever it will be, it's going to be a smart, lower-power solution.
Kaveri uses the same generation of Xtensa processors for UVD that the XB1 and PS4 use for their vision processing and codecs (see the section on UVD in Kaveri). Regardless of semantics, or any understanding or misunderstanding of rendering versus upscaling, it is accomplished with a Xtensa accelerator.
Enough already all of you.
Geesh, I try to educate you guys on what's in the game consoles and NO one listens. This will be implemented by both consoles and is a reason that AMD, the XB1, and the PS4 are all using the same family of accelerator.
Not really... While Jeff could be an old school Jedi, onQ is more like Jar Jar Binks...
No one (sane) is claiming native 4k rendering. As to why to release in general, there are a few reasons.
1) There are the realities of current processor timelines. Moore's law is proving not to be a law, particularly within the confines of producing a console that has size and thermal limitations. CPUs and GPUs have been seeing slower, incremental upgrades. So if a console manufacturer wants to show the huge sorts of performance increases generations have typically demonstrated, the length between generations needs to increase. The problem with that, of course, is that PCs and phones do not follow the same generational paradigm. So while their improvements may be incremental, you still reach a point in the middle of a console generation (if not earlier) where there is a pretty sizable gulf between what they can do and what the current console generation can do.
2) The console market is pretty risky. We are past the days where a console manufacturer is willing to swallow huge losses at the beginning of a generation with the hopes of recouping them tied to its platform being successful. With that in mind they are limiting performance at the start of the gen even more, making the above eventual gulf happen even quicker.
Since both of those are tied to the existing PC / phone paradigm, the question becomes why not join them? Obviously it needs to be done in a very deliberate and systematic way. You don't want to just fully open it up or you lose the point of consoles. But if things stay the way they have in the past, they are obviously going to see long-term issues since they aren't being agile to the realities of the changing processor tech landscape.
3) That is the general situation that basically all of the console manufacturers are facing moving forward, and many would argue that in itself is enough to justify at least trying to change paradigms. In this specific case though, there is also the justification of what's happening with display tech and over-the-top services. We are in the midst of a big change in content and display with UHD. Resolution isn't really the long pole here; it's HDR, 10+ bit, and wider color gamuts that are making this a sea change. When we moved from SD to HD, all that really changed was the resolution. Everything was still targeting Rec. 709. The reason we saw improvements in color and the like wasn't because the standard had changed, it was because display technology and content distribution had finally caught up to what the existing standard could do. Now we're moving past that. The unfortunate reality of all of this is that DRM also comes with these new standards. You want to watch UHD content from VUDU, Netflix, and Amazon? You need HDCP 2.2.
Since last gen, consoles have been valued as general media players by many people. So when you consider the above, we are at a point where it behooves the console manufacturers to have support for all of the above ... and they need it well before a true next generation will hit. So it comes down to a revision of some sort being necessary no matter what. They want 4K media playback with all the bells and whistles (and in the case of Sony at least, that includes Ultra HD Blu-ray too), as well as game output that plays nicely with the TVs that are going to be available.
Basically since a redesign is a given because of #3 anyway, why not also throw in some more processing because of #1 and #2 and see how it goes? If ever there was a time to test out a paradigm shift, the current technology landscape is arguably the most logical.
Know what is nonsense? It's not because you say "simple" that it becomes simple. The GCN architecture is pretty brilliant with scalability. It's easier to have twice as many compute units and disable them for compatibility purposes than the opposite, even though it would be stupid to disable anything. You develop for an OS, an API. This makes things easier and lets games run flawlessly on the next hardware.
As for customizing for two profiles, this is nonsense. You develop for the lowest common denominator and you let the muscle of the better hardware do the rest.
Sounds a little racist...
This shit is funny, thanks for the laugh homie.