PS4K information (~2x GPU power w/ clock+, new CPU, price, tent. Q1 2017)

- The rumors of the PS4k are talking about ~2x the GPU power of the current console...

- Based on current PC tech, 2x the horsepower is not enough to render modern AAA games in 4k

This is clearly false. You can play RotTR with a 970 in 4k, just not at 60fps.

Prices on 4K TVs are dropping. You can get a Sony 55-inch for under $1k.
 
This is a dumb idea. I bet like 5% of people have a 4K TV. Just release the PS5 in 2018 if you think this gen should be short.

Roughly 10% of US households owned a 4K TV as of December of last year; it's supposed to be near 20% by the end of this year, and 50% by the end of the decade.

And that's with slightly higher prices and no real 4k content...
 
If they are able to stuff enough power under the hood, it may be possible to create a general intercept for rendering resolution and simply let frame rates hit their caps for older games and indies that choose to follow the same process. It wouldn't be that difficult. The real question is more centered around how and when PS4 compatible hardware could reach that power in a consumer level box.



That's a galling response, considering:

1. The fact that you are suggesting a nonnormative process which is, by your own admission, inferior to the standard for the current situation is asinine.

2. Your "smaller GPU" idea to "do the uprendering" is a further expression of your incompetence with the subject. There is no possibility of "special hardware for the task of uprendering to 4K". The process literally consists of rendering 4 1080p frames and recombining them. A single GPU with 4 times the power would still be the minimum requirement and the most efficient way to do it.

But smaller GPUs can render 4 1080P frames & the one I posted can render 4k at 60fps.

If the PS4 main GPU is doing the heavy lifting, the smaller GPU could render 3 less demanding pixels to use for the up rendering.
 
Let's say the dGPU (as per the zoetis SLI quote) turns out real. My guess would be that the main GPU is the exact same as the OG PS4's, while the other one is turned off until future games start utilizing the added power. It would go hand in hand with the "with PS4K, OG PS4 games will not receive any boost unless via a future patch" claim. It doesn't make sense when a game like Infamous (and many others) runs at an "unlimited" framerate that goes up and down depending on how much the scene is being stressed. Another is a game like Wolfenstein, which IIRC uses dynamic resolution on PS4 up to 1080p. Why would it not be 1080p all the time on PS4K if it's more than double the power? Maybe it's not as simple as it sounds, but for now I believe the second GPU will be offline until games start utilizing it. However, if it is a "pain in the ass to develop for," then just use the OG PS4 GPU while leaving the second one off, and take advantage of the improved RAM, new CPU, and higher CPU clock speed. It would still offer a good improvement.
Part of AMD's "25X more efficient by 2020" plan is that more powerful APUs have multiple GPU power islands, where GPU blocks can be turned on and off as needed. That and how memory is used are some of the current efficiency savers. dGPUs are not a game console semi-custom feature; they make no sense there, just as expandable memory makes no sense; PCs need to have both.

Having an OS that supports multiple power-island GPUs, DSPs, FPGAs, and GPGPU is the problem and the issue. OpenVX, Vulkan, and HSAIL are answers to supporting new efficiencies, be they hardware or design. The hardware manufacturer provides drivers as APIs that an OS like Windows or Linux can use. The issue is that the APIs must be accepted standards (Khronos) and widely used, which is what AMD is dealing with and what game consoles are an answer to.

Game developers will resist using an API only offered by game consoles until PC hardware is on the market that also offers those APIs. That's what all the new dGPUs and APUs offered by AMD will provide by this October.
 
If they are able to stuff enough power under the hood, it may be possible to create a general intercept for rendering resolution and simply let frame rates hit their caps for older games and indies that choose to follow the same process. It wouldn't be that difficult. The real question is more centered around how and when PS4 compatible hardware could reach that power in a consumer level box.



That's a galling response, considering:

1. The fact that you are suggesting a nonnormative process which is, by your own admission, inferior to the standard for the current situation is asinine.

2. Your "smaller GPU" idea to "do the uprendering" is a further expression of your incompetence with the subject. There is no possibility of "special hardware for the task of uprendering to 4K". The process literally consists of rendering 4 1080p frames and recombining them. A single GPU with 4 times the power would still be the minimum requirement and the most efficient way to do it.
How about an accelerator that would be much more efficient (greater than 5X, up to 100X in a DPU configuration) than a GPU at the task in question? I'm also talking up-scaling, not up-rendering. Whatever system is to be used in the PS4K (and, I think, the launch PS4) for games will also be used to up- and down-render media for the UHD digital bridge, and it must not use much power, so a second GPU, even on-die, is less practical. I use the term render because color palette and dynamic range also need to be handled.

The VPU and ISP blocks in the following slide (credit onQ123) are essentially the same as the Xtensa DPU accelerators that are in the PS4 and XB1, and the MCU is like another Xtensa accelerator for TrueAudio. Notice what they support, which is mirrored in the game consoles, and understand that the Xtensa accelerators are equally efficient at codecs, vision, and up- and down-scaling video.

[image: PowerVR GT7900 slide]

[image: DPU diagram]

It's a Cadence Tensilica Xtensa DPU. The XB1 is Cadence (ARM IP) and AMD IP, and the PS4 Southbridge contains the Xtensa processor/DPU on an ARM AXI bus.
 
But smaller GPUs can render 4 1080P frames & the one I posted can render 4k at 60fps.

If the PS4 main GPU is doing the heavy lifting, the smaller GPU could render 3 less demanding pixels to use for the up rendering.

The GPU you posted is more than an order of magnitude underpowered for rendering a standard PS4 game at 4K 30fps. It essentially just supports output at that resolution and frame rate, like playing a video.

Those other 3 pixels are each equally as demanding as the first. That is why uprendering to 4K is more taxing than natively rendering at 4K. If they were less demanding to create, such as by applying an algorithm to the original image, then it would just be upscaling.

You need to look into how rasterization is done on a GPU to get a sense of just how much processing is actually being done. Try reading through this:

http://www.scratchapixel.com/lesson...plementation/overview-rasterization-algorithm
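
In code terms, the heart of what that article walks through is an edge-function coverage test; here is a toy, illustrative version (not code from the article) showing why the work scales with the number of output pixels:

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area: positive when p lies to the left of the directed edge a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    """Coverage test for one counter-clockwise triangle: every pixel of the
    target is tested against all three edges, so the work scales with the
    number of output pixels, not with how "detailed" the scene looks."""
    (ax, ay), (bx, by), (cx, cy) = tri
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            if (edge(ax, ay, bx, by, px, py) >= 0 and
                    edge(bx, by, cx, cy, px, py) >= 0 and
                    edge(cx, cy, ax, ay, px, py) >= 0):
                covered.append((x, y))
    return covered

print(len(rasterize(((1, 1), (7, 2), (3, 6)), 8, 8)))  # count of pixels inside the triangle
```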

How about an accelerator that would be much more efficient (greater than 5X) than a GPU at the task in question? I'm also talking up-scaling, not up-rendering. Whatever system is to be used in the PS4K (and, I think, the launch PS4) for games will also be used to up- and down-render media for the UHD digital bridge, and it must not use much power, so a second GPU, even on-die, is less practical. I use the term render because color palette and dynamic range also need to be handled.

Upscaling is already incorporated into most 4K TVs. It would be pointless to have the hardware do what the TV already can.

The task in question, "up-rendering", is still rendering 8.3 million unique pixels, same as a native 4K frame. If there were something 5 times more efficient than GPUs at that task, we wouldn't be using GPUs.
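
The arithmetic behind that figure:

```python
# Pixel counts behind the "8.3 million unique pixels" claim.
px_1080p = 1920 * 1080   # 2,073,600 pixels per 1080p frame
px_4k = 3840 * 2160      # 8,294,400 pixels per 4K frame

print(px_4k)              # 8294400
print(px_4k // px_1080p)  # 4 -> up-rendering means four full 1080p renders' worth of work
```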

Check interleaved rendering and sampling. You can use rendering targets as small as you want to get as high a resolution as you want.
A completely non-interleaved method would be to render the image in small tiles and combine them at the end.

That is a 2006 paper on global illumination, which is a type of lighting model. It has nothing to do with final output resolution. Edit: I see where you were drawing parallels, but rendering per pixel presents a different challenge than the per-area impact of a light on the G-buffer.
 
THE DETAILS DON'T EXIST! You can't "take" them from anywhere, because they don't exist...

you can take the 4 1080p frames and use your fancy maths to approximate what those new details SHOULD (according to your algorithm) look like in 4k and then render the image...
Check interleaved rendering and sampling.
https://www.google.com/url?sa=t&sou...8ijinpKYCS4JkzG-Q&sig2=oZVXs0_s8Vubec00yytGzg



You can use rendering targets as small as you want to get as high a resolution as you want.
A completely non-interleaved method would be to render the image in small tiles and combine them at the end.
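
A minimal sketch of the tiled (non-interleaved) variant, assuming a render_tile(x0, y0, w, h) hook that rasterizes just that viewport region; the hook and its stand-in body are illustrative only:

```python
import numpy as np

def render_tile(x0, y0, w, h):
    """Stand-in for a renderer run with a shifted viewport; it just fills a
    gradient here so the example runs."""
    ys, xs = np.mgrid[y0:y0 + h, x0:x0 + w]
    return ((xs + ys) % 256).astype(np.uint8)

def render_tiled(width, height, tile_w, tile_h):
    """Render the image in small tiles and combine them at the end."""
    frame = np.zeros((height, width), dtype=np.uint8)
    for y0 in range(0, height, tile_h):
        for x0 in range(0, width, tile_w):
            frame[y0:y0 + tile_h, x0:x0 + tile_w] = render_tile(x0, y0, tile_w, tile_h)
    return frame

# Four 1920x1080 tiles combine into one native 3840x2160 frame.
print(render_tiled(3840, 2160, 1920, 1080).shape)  # (2160, 3840)
```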
 
Thank you for this. Much better answer here. Uprendering is basically what you see all the fancy emulators do by rendering PS1 games at a higher resolution and making them look much better than we remember. Upscaling isn't making the picture quality that much better; it's just scaling the image to whatever resolution you want.

Up-rendering, in terms of emulating a low-level graphics pipeline that depends on specific memory layouts to perform rendering and post-processing (exact resolution/frame-buffer size being part of the variables used/depended upon, in a way), is the safest and most compatible option. You are letting the pipeline re-render the same exact frame the way it would have normally done it, just with subtle sub-pixel shifts (affecting how each pixel is calculated depending on where the sample position is, but not affecting the rest of the pipeline), and then accumulating the final frames in the host computer's memory. It is a little bit like having 4 or more PS2s running inside the PS4 and letting the PS4 synchronise them and merge their outputs... essentially something like what they were doing with the GSCube workstation (no emulation involved there, though).
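
As a toy sketch of that idea, assuming a render_1080p(dx, dy) hook that re-runs an unmodified 1080p pipeline with a sub-pixel sample offset (the names and the synthetic "scene" are illustrative, not anything confirmed about the PS4K):

```python
import numpy as np

def render_1080p(dx, dy):
    """Stand-in for the unmodified 1080p pipeline re-run with a sub-pixel
    offset (dx, dy), measured in 1080p pixels; samples a synthetic scene."""
    ys, xs = np.mgrid[0:1080, 0:1920].astype(np.float64)
    return np.sin((xs + dx) * 0.01) * np.cos((ys + dy) * 0.01)

def uprender_4k():
    """Render four 1080p frames with 2x2 sub-pixel shifts and interleave
    them into a 3840x2160 frame: every output pixel is computed by the
    real pipeline, which is what separates this from upscaling."""
    frame = np.zeros((2160, 3840))
    for sy in (0, 1):
        for sx in (0, 1):
            # Half-pixel offsets land each pass on a different 4K sample grid.
            frame[sy::2, sx::2] = render_1080p(sx * 0.5, sy * 0.5)
    return frame

print(uprender_4k().shape)  # (2160, 3840)
```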
 
Well, considering he's a computer engineer (?) with probable experience fabricating or otherwise, I wouldn't feel too bad. I am curious to know where your extensive knowledge stems from though, Jeff.

Oh, I hear ya. I absolutely love reading his posts as his knowledge is amazing; I just can only understand about half of it most of the time lol
 
Uprender; they have no other choice but to uprender the older games that are set to render at 1080p if they want them rendered in 4K.

They don't need 2X the power of the PS4 to upscale to 4K; they could just let the TV do that.


But uprendering still requires a lot of power, as you're rendering the native game at a higher resolution.
 
But smaller GPUs can render 4 1080P frames & the one I posted can render 4k at 60fps.
so what? it's absolutely irrelevant that the garbage GPU you posted CAN render at 4k/60...the question is what can it render at that res/framerate??...

it won't even touch a PS3 game at those specs, let alone a PS4 game...I promise you that

If the PS4 main GPU is doing the heavy lifting, the smaller GPU could render 3 less demanding pixels to use for the up rendering.

omg...you really need to stop.

less demanding pixels? you really have no clue as to what you're talking about...I'm done with this discussion, you're so clueless I can't keep up with your constant spin cycle...it's tiring...

Check interleaved rendering and sampling.
https://www.google.com/url?sa=t&sou...8ijinpKYCS4JkzG-Q&sig2=oZVXs0_s8Vubec00yytGzg



You can use rendering targets as small as you want to get as high a resolution as you want.
A completely non-interleaved method would be to render the image in small tiles and combine them at the end.

admittedly I'm too tired to read that right now, but based on your comment, sure, you could render an image in small tiles and stitch it together to form a monster image of super high resolution...

was it the Gran Turismo games that did this with multiple PS3s?? each PS3 would render a 1920x1080 "corner" of the screen and then stitch them together?...

that's not what has been discussed here though...what we are being told is happening is that the GPU is rendering a 1080p frame...then it's taking the 3 previous 1080p frames and, by shifting some pixels around, injecting some unicorn semen, and smushing them together, you create some magical 4k image...

what I think you're talking about (again, too tired to read right now) is rendering separate tiles (say 4 1080p tiles in the case of 4k) and then placing them next to each other to form a single 4k image...that would actually create a native 4k image, though you would need the horsepower to render all 4 at the same time...the PS4k will not be able to do this...
 
Oh, I hear ya. I absolutely love reading his posts as his knowledge is amazing; I just can only understand about half of it most of the time lol

Same here; it sparks my interest so much. The really technical analysis makes me want to study it. I have a background in some computer science and modeling/animation. But there's something about computer engineering discussion that just... engages me. Sometimes I think I'm in the wrong field (education; haven't graduated).
 
The GPU you posted is more than an order of magnitude underpowered for rendering a standard PS4 game at 4K 30fps. It essentially just supports output at that resolution and frame rate, like playing a video.

Those other 3 pixels are each equally as demanding as the first. That is why uprendering to 4K is more taxing than natively rendering at 4K. If they were less demanding to create, such as by applying an algorithm to the original image, then it would just be upscaling.

You need to look into how rasterization is done on a GPU to get a sense of just how much processing is actually being done. Try reading through this:

http://www.scratchapixel.com/lesson...plementation/overview-rasterization-algorithm



Upscaling is already incorporated into most 4K TVs. It would be pointless to have the hardware do what the TV already can.

The task in question, "up-rendering", is still rendering 8.3 million unique pixels, same as a native 4K frame. If there were something 5 times more efficient than GPUs at that task, we wouldn't be using GPUs.



That is a 2006 paper on global illumination, which is a type of lighting model. It has nothing to do with final output resolution. Edit: I see where you were drawing parallels, but rendering per pixel presents a different challenge than the per-area impact of a light on the G-buffer.
Kaveri uses the same generation of Xtensa processors for UVD that the XB1 and PS4 use for their vision processing and codecs. From the section on UVD in Kaveri:
https://www.guru3d.com/articles-pages/amd-a10-7800-kaveri-apu-review said:
The AMD Kaveri APUs add some new high-quality video post-processing features to improve video. This includes new super-resolution upscaling that can improve how SD-quality video looks on HD screens, as well as how 1080P content looks on Ultra HD screens.

AMD Eyefinity Technology and 4K Ultra HD Support
Regardless of semantics or any understanding or misunderstanding of rendering or up-scaling, it is accomplished with an Xtensa accelerator.

Enough already, all of you.

Geesh, I try to educate you guys on what's in the game consoles and NO one listens. This will be implemented by both consoles and is a reason that AMD, the XB1, and the PS4 are all using the same family of accelerator.
 
Kaveri uses the same generation of Xtensa processors for UVD that the XB1 and PS4 use for their vision processing and codecs. From the section on UVD in Kaveri (quoted above): regardless of semantics or any understanding or misunderstanding of rendering or up-scaling, it is accomplished with an Xtensa accelerator.

Enough already, all of you.

Geesh, I try to educate you guys on what's in the game consoles and NO one listens. I am not guessing! This will be implemented by both consoles and is a reason that AMD, the XB1, and the PS4 are all using the same family of accelerator.

Many of us appreciate it Jeff!
 
Well, considering he's a computer engineer (?) with probable experience fabricating or otherwise, I wouldn't feel too bad. I am curious to know where your extensive knowledge stems from though, Jeff.
No, I have an EE background, so I can understand the terms and do lots of reading. I'm 64, so most of my experience is analog.
 
No, I have an EE background, so I can understand the terms and do lots of reading. I'm 64, so most of my experience is analog.
Get the fuck out of here?! 64? I never would have guessed it. I'm in my mid (to late) 30s, and I like to stay up on all the newest tech, often simply because the process interests me, not necessarily the actual use of the item. So I imagine whatever newfangled things the kids are playing with 30 years from now will still interest me in some form or another, but I can only hope I have half the passion you do by the time I'm your age :)
 
That is a 2006 paper on global illumination, which is a type of lighting model. It has nothing to do with final output resolution. Edit: I see where you were drawing parallels, but rendering per pixel presents a different challenge than the per-area impact of a light on the G-buffer.
Agreed.

The basic idea is to divert rendering at the vertex or geometry shader to get a duplicated version for the outputs. (Same as instanced left/right-eye rendering for VR.)

Anyway, I agree with what you say; it's quite a silly way to do it.
Especially now that rendering is not about filling a single buffer. (Temporal, deferred, etc. methods make this even harder and might even cancel some of the usability by using the same sample locations.)

The cost is also tremendous, as you have to set up polygons several times, as well as G-buffers and all the processing involved... (possibly even CPU stuff).
It really might become messy quite fast.

Simply rendering into bigger buffer is both easier and faster.
 
that's not what has been discussed here though...what we are being told is happening is that the GPU is rendering a 1080p frame...then it's taking the 3 previous 1080p frames and, by shifting some pixels around, injecting some unicorn semen, and smushing them together, you create some magical 4k image...

what I think you're talking about (again, too tired to read right now) is rendering separate tiles (say 4 1080p tiles in the case of 4k) and then placing them next to each other to form a single 4k image...that would actually create a native 4k image, though you would need the horsepower to render all 4 at the same time...the PS4k will not be able to do this...

Ignoring the BS being thrown around, Panajev gave a pretty clean explanation of what it actually is:

Up-rendering, in terms of emulating a low-level graphics pipeline that depends on specific memory layouts to perform rendering and post-processing (exact resolution/frame-buffer size being part of the variables used/depended upon, in a way), is the safest and most compatible option. You are letting the pipeline re-render the same exact frame the way it would have normally done it, just with subtle sub-pixel shifts (affecting how each pixel is calculated depending on where the sample position is, but not affecting the rest of the pipeline), and then accumulating the final frames in the host computer's memory. It is a little bit like having 4 or more PS2s running inside the PS4 and letting the PS4 synchronise them and merge their outputs... essentially something like what they were doing with the GSCube workstation (no emulation involved there, though).

That said, modern game engines are designed so that the pipeline shouldn't have an issue with a change in rendering resolution, thus it seems it would be an unnecessary cost to create the frame in this manner.

Kaveri uses the same generation of Xtensa processors for UVD that the XB1 and PS4 use for their vision processing and codecs. From the section on UVD in Kaveri (quoted above): regardless of semantics or any understanding or misunderstanding of rendering or up-scaling, it is accomplished with an Xtensa accelerator.

Enough already, all of you.

Geesh, I try to educate you guys on what's in the game consoles and NO one listens. This will be implemented by both consoles and is a reason that AMD, the XB1, and the PS4 are all using the same family of accelerator.

Xtensa is an accelerator for a lot of general multimedia applications; of course one is going to be included in the next iterations of the consoles, as they have to handle all the different quality levels of media that will need it anyway. That said, if the system is already outputting native 1080p or 4K, there is no point in wasting the extra 0.5 watt or so to upscale when there is a better processor for that task in the TV.

For all it matters, that portion of the Xtensa might as well shut off under the circumstances we were discussing. Not saying it's irrelevant, just that it wasn't pertinent to the discussion.

Edit: BTW, I appreciate the extensive write-up, as a fellow EE with a lot of years to catch up on.
 
What I think when Jeff and onQ post:

"Always two there are; no more, no less. A master and an apprentice."
Except with one I get the feeling that he knows what he's talking about, while the other just rambles along, having no clue what he just said.
 
Ignoring the BS being thrown around, Panajev gave a pretty clean explanation of what it actually is:



That said, modern game engines are designed so that the pipeline shouldn't have an issue with a change in rendering resolution, thus it seems it would be an unnecessary cost to create the frame in this manner.



Xtensa is an accelerator for a lot of general multimedia applications; of course one is going to be included in the next iterations of the consoles, as they have to handle all the different quality levels of media that will need it anyway. That said, if the system is already outputting native 1080p or 4K, there is no point in wasting the extra 0.5 watt or so to upscale when there is a better processor for that task in the TV.

For all it matters, that portion of the Xtensa might as well shut off under the circumstances we were discussing. Not saying it's irrelevant, just that it wasn't pertinent to the discussion.


Edit: BTW, I appreciate the extensive write-up, as a fellow EE with a lot of years to catch up on.
Possibly, but TVs have power restrictions that the game console or computer doesn't have, so they (game consoles and computers) can do more processing. This is why Kaveri does it rather than leaving it to the TV. Edit: Kaveri has an HDMI port, in case you tried to argue that Kaveri usually outputs to a monitor with no video processing. 4K TVs are the best monitors.

The AMD Kaveri APUs add some new high-quality video post-processing features to improve video. This includes new super-resolution upscaling that can improve how SD-quality video looks on HD screens, as well as how 1080P content looks on Ultra HD screens.

AMD Eyefinity Technology and 4K Ultra HD Support
AMD has stated that the same hardware used for HEVC codecs in the XB1 is used in AMD UVD, and that is the Xtensa accelerator. It can do much, much more, and AMD has had them in APUs since 2010, along with the TrustZone processor, as Xtensa processors/DPUs need an ARM AXI bus. ALL AMD APUs can support gesture recognition using the Xtensa accelerator. Starting with Kaveri (UVD 4.2) it can support HEVC, and Carrizo (UVD 6) can support HEVC with a duty cycle, allowing it to turn off part of the time while processing an HEVC video stream.
 
Except with one I get the feeling that he knows what he's talking about, while the other just rambles along, having no clue what he just said.

In the beginning I also thought that Jeff knew some things, but after so many posts with mainly buzzwords, connections between things that don't really have a connection, posts written in a way that obfuscates the message, and his inability to express in layman's terms what he tries to say, I realized that he doesn't really understand what he posts.
It's not the stuff he posts (the actual links); his conclusions and connections for all this are just guesswork.
 
Ignoring the BS being thrown around, Panajev gave a pretty clean explanation of what it actually is:



That said, modern game engines are designed so that the pipeline shouldn't have an issue with a change in rendering resolution, thus it seems it would be an unnecessary cost to create the frame in this manner.



Xtensa is an accelerator for a lot of general multimedia applications; of course one is going to be included in the next iterations of the consoles, as they have to handle all the different quality levels of media that will need it anyway. That said, if the system is already outputting native 1080p or 4K, there is no point in wasting the extra 0.5 watt or so to upscale when there is a better processor for that task in the TV.

For all it matters, that portion of the Xtensa might as well shut off under the circumstances we were discussing. Not saying it's irrelevant, just that it wasn't pertinent to the discussion.

Edit: BTW, I appreciate the extensive write-up, as a fellow EE with a lot of years to catch up on.

I agree with you; modern engines are quite likely not to make the same usage of their video memory as the PS2 ones made of the GS's eDRAM, so this technique is likely over-engineered for that purpose. In terms of the scaler chip being built in or not / enabled or not for 1080p-to-4K scaling, it is true that Sony has great TVs to sell, but if I cared a lot about my device's video quality (maybe as part of my unique selling points) I would prefer to do the scaling on-chip and send a reliably great signal to the TV instead of trusting an unknown chip in an unknown TV.

As an ECE graduate, I sympathise with lovers of both the EE and CS fields :). Researched posts are appreciated and the intention is good.
 
Wow, I can't believe I made it to the end of this thread. I was literally Googling words and concepts every few posts or so in order to try to keep up. Here is my layman's takeaway from the topics discussed.
  • Uprendering is not upscaling. It is impossible for upscaling to add information to an image whereas uprendering does
  • Uprendering to a 4K image takes more processing power than rendering to 4K directly
  • A full GPU is needed to create the extra pixels needed for uprendering. DPUs cannot take over this task
  • If the PS4K uses uprendering, it will uprender to an intermediate resolution (1440p?) and upscale to 4K from there
  • Uprendering and dual GPUs' biggest benefit, if implemented, would be seamless backwards compatibility, which comes at the expense of some performance
I am unconvinced that uprendering will be used to improve the image quality of existing games. I am inclined to believe OsirisBlack, the OP, when he said:
It was also made very clear that current games would not be getting any type of performance upgrade by being played on the system, and any benefits to older games would come via patch, per game and per developer. When asked if this was going to happen, the response was "It's a possibility but doubtful, with the exception of a handful of games."
I think Sony will probably make an API or framework available to improve the image quality of games targeted towards the OG PS4 but it would require the developer's intervention. Hence the need for a patch for the improvements to be enabled. My guess is that this would instead be intended for new games going forward so they would look a bit better on the PS4K for developers who don't want to expend much effort to support it. Uprendering could be a part of that framework.
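
For scale, here are the pixel budgets behind the intermediate-resolution bullet above (the 1440p figure is the speculation from that bullet, not anything confirmed):

```python
# Pixel budget per frame for the resolutions discussed above.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
base = 1920 * 1080

for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h:,} pixels ({w * h / base:.2f}x 1080p)")
# 1080p: 2,073,600 pixels (1.00x 1080p)
# 1440p: 3,686,400 pixels (1.78x 1080p)
# 4K: 8,294,400 pixels (4.00x 1080p)
```

A roughly 2x GPU fits a 1.78x pixel load far more comfortably than the 4x load of native 4K, which is why the intermediate-resolution guess is at least plausible.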
 
Up-rendering, in terms of emulating a low-level graphics pipeline that depends on specific memory layouts to perform rendering and post-processing (exact resolution/frame-buffer size being part of the variables used/depended upon, in a way), is the safest and most compatible option. You are letting the pipeline re-render the same exact frame the way it would have normally done it, just with subtle sub-pixel shifts (affecting how each pixel is calculated depending on where the sample position is, but not affecting the rest of the pipeline), and then accumulating the final frames in the host computer's memory. It is a little bit like having 4 or more PS2s running inside the PS4 and letting the PS4 synchronise them and merge their outputs... essentially something like what they were doing with the GSCube workstation (no emulation involved there, though).

I wanted to mention the GSCube but people already think I'm crazy lol
 
Wow, I can't believe I made it to the end of this thread. I was literally Googling words and concepts every few posts or so in order to try to keep up. Here is my layman's takeaway from the topics discussed.
  • Uprendering is not upscaling. It is impossible for upscaling to add information to an image whereas uprendering does
  • Uprendering to a 4K image takes more processing power than rendering to 4K directly
  • A full GPU is needed to create the extra pixels needed for uprendering. DPUs cannot take over this task
  • If the PS4K uses uprendering, it will uprender to an intermediate resolution (1440p?) and upscale to 4K from there
  • Uprendering and dual GPUs' biggest benefit, if implemented, would be seamless backwards compatibility, which comes at the expense of some performance
I am unconvinced that uprendering will be used to improve the image quality of existing games. I am inclined to believe OsirisBlack, the OP, when he said:

I think Sony will probably make an API or framework available to improve the image quality of games targeted towards the OG PS4 but it would require the developer's intervention. Hence the need for a patch for the improvements to be enabled. My guess is that this would instead be intended for new games going forward so they would look a bit better on the PS4K for developers who don't want to expend much effort to support it. Uprendering could be a part of that framework.
Developers were not given access to many of the APIs possible with the Xtensa accelerators, so launch games on launch PS4s might benefit from this also. It's very possible that the PS4K could include a more powerful Xtensa DPU/accelerator in addition to more GPU power. They are likely going to release a new VR goggle with higher resolution, which needs more processing for the video distortion, and maybe better head and hand/body tracking. 1080P stereo cameras, or mono with IR depth like the XB1's? A 4G radio for wireless VR goggles?

They are talking like the VR separate view on the living room TV can support multiplayer, and that needs more performance. 2X, right?
 
so what? it's absolutely irrelevant that the garbage GPU you posted CAN render at 4k/60...the question is what can it render at that res/framerate??...

it won't even touch a PS3 game at those specs, let alone a PS4 game...I promise you that



omg...you really need to stop.

less demanding pixels? you really have no clue as to what you're talking about...I'm done with this discussion, you're so clueless I can't keep up with your constant spin cycle...it's tiring...



admittedly I'm too tired to read that right now, but based on your comment, sure, you could render an image in small tiles and stitch it together to form a monster image of super high resolution...

was it the Gran Turismo games that did this with multiple PS3s?? each PS3 would render a 1920x1080 "corner" of the screen and then stitch them together?...

that's not what has been discussed here though...what we are being told is happening is that the GPU is rendering a 1080p frame...then it's taking the 3 previous 1080p frames and, by shifting some pixels around, injecting some unicorn semen, and smushing them together, you create some magical 4k image...

what I think you're talking about (again, too tired to read right now) is rendering separate tiles (say 4 1080p tiles in the case of 4k) and then placing them next to each other to form a single 4k image...that would actually create a native 4k image, though you would need the horsepower to render all 4 at the same time...the PS4k will not be able to do this...

Oh really?

From the 2nd GPU

[images]


Funny it didn't seem to be too complex for GT5 & GT6

[images: GT6 multi-monitor setup]
 
This is just a reaction move to the NX, or maybe to a new Xbox? Otherwise, why bother?

PS4 is at 40 million in 2+ years. Sony doesn't give a damn about an NX or Xbox right now; this is part of a bigger deal that includes 4K TV streaming, 4K movie streaming, 4K Blu-rays, 4K TVs & so on.
 
Developers were not given access to many of the APIs possible with the Xtensa accelerators, so launch games on launch PS4s might benefit from this also. It's very possible that the PS4K could include a more powerful Xtensa DPU/accelerator in addition to more GPU power. They are likely going to release a new VR goggle with higher resolution, which needs more processing for the video distortion, and maybe better head and hand/body tracking. 1080P stereo cameras, or mono with IR depth like the XB1's? A 4G radio for wireless VR goggles?

They are talking like the VR separate view on the living room TV can support multiplayer, and that needs more performance. 2X, right?

Sony wants to go with simplified development. They are not going to create another complex heterogeneous processor architecture that developers would need to navigate. Opening up the Xtensa seems like it would run counter to that. For example...
Cerny approached the design of the PlayStation 4 with one important mandate above all else: "The biggest thing is we didn't want the hardware to be a puzzle that programmers would be needing to solve in order to make quality titles."

http://www.gamasutra.com/view/feature/191007/inside_the_playstation_4_with_mark_.php?print=1
Besides all that, if the Xtensa could generate pixels for display, why would they need a GPU at all? Perhaps they could put a better upscaler and antialiasing on it, but that seems about it.

However I agree that Sony will release a higher resolution VR headset which would not work on the PS4. In fact I expect Sony to ride the VR train for as long as it goes. If room level tracking takes off, they'll do that too.

What's the bandwidth and latency of 4G radio? Any solution would have to encode and decode at the very least a 1080P stream, but for an upgraded headset it'd be a lot more. That is a huge stretch to use it for VR. I could see it being used for something like HoloLens, since the processing is on the goggles; only a small amount of data would need to be sent to the headset, and it would be relatively tolerant to signal interruptions. VR is far more sensitive and data-hungry.
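
Some rough numbers behind that concern (the LTE throughput figure is a ballpark assumption, not a spec):

```python
# Uncompressed video bandwidth vs. a rough 4G/LTE budget.
def raw_mbps(width, height, fps, bits_per_pixel=24):
    """Raw (uncompressed) bandwidth of a video stream in Mbit/s."""
    return width * height * fps * bits_per_pixel / 1e6

print(raw_mbps(1920, 1080, 60))   # ~2986 Mbit/s for uncompressed 1080p60
print(raw_mbps(1920, 1080, 120))  # ~5972 Mbit/s at a 120 Hz VR refresh

# Real-world LTE delivers on the order of tens of Mbit/s, so the stream
# would need heavy compression, and encode/decode latency is exactly
# what VR can't tolerate.
```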
 
PS4 is at 40 million in 2+ years. Sony doesn't give a damn about an NX or Xbox right now; this is part of a bigger deal that includes 4K TV streaming, 4K movie streaming, 4K Blu-rays, 4K TVs & so on.

You're leaving out a key interest Sony has: VR. If they don't create a box that will actually be able to handle modern games in VR at 60+ fps, they're in trouble. They can drop settings a lot to compensate, but that's not the VR experience people want. If Microsoft releases a new VR-ready console that pairs with Oculus and has the theoretical performance increase that AMD will bring with Polaris and their later APU revisions, then I think they do care. Keeping users on their infrastructure is far, far more valuable than selling a console. I'd imagine a single year of PS+ is more valuable to them than all of their console sales.

I know VR isn't actually that big of a deal now. But it will be this decade, and it's all about positioning of infrastructure and services. They want people to stay and grow their numbers because the money is in software and services. Perception could change if PlayStation starts falling seriously behind hardware-wise. That's why I think they do care about what Microsoft and Nintendo are doing hardware-wise.
 
You're leaving out a key interest Sony has: VR. If they don't create a box that will actually be able to handle modern games in VR at 60+ fps, they're in trouble. They can drop settings a lot to compensate, but that's not the VR experience people want. If Microsoft releases a new VR-ready console that pairs with Oculus and has the theoretical performance increase that AMD will bring with Polaris and their later APU revisions, then I think they do care. Keeping users on their infrastructure is far, far more valuable than selling a console. I'd imagine a single year of PS+ is more valuable to them than all of their console sales.

We don't yet know what is palatable for VR on consoles in terms of graphics prowess.
 
We don't yet know what is palatable for VR on consoles in terms of graphics prowess.

We know what people's expectations are. They don't want to just play gimmicky on-rails shooters or simple polygon games with little depth. People want immersive and graphically breathtaking games. The PC market around VR has proven this; I don't think it's much different on console.

As far as performance goes, we do know what the PS4 is relatively capable of, regardless of what people want to believe. The engines and graphics tech as of this moment demand certain things, and the PS4 does not have the computational power to handle them the way developers want.
 
We know what people's expectations are. They don't want to just play gimmicky on-rails shooters or simple polygon games with little depth. People want immersive and graphically breathtaking games. The PC market around VR has proven this; I don't think it's much different on console.

Great.

Now quantify that.

Good luck.
 
Great.

Now quantify that.

Good luck.

Here is a game called Everspace that is planned for VR.

[gifs: Everspace gameplay]


This game would need to be graphically pared down to run at 30fps on Xbox One or PS4. Without serious image-quality reductions, to the point where the game would no longer resemble the parent game graphically, it would not be possible to play it on PSVR at a targeted 60fps reprojected to 120.
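
For context, the frame-time budgets involved (the PSVR 60-to-120 reprojection scheme is as reported; the numbers are simple arithmetic):

```python
# Milliseconds available per frame at the frame rates discussed.
for hz in (30, 60, 120):
    print(f"{hz} fps -> {1000 / hz:.1f} ms per frame")
# 30 fps -> 33.3 ms
# 60 fps -> 16.7 ms
# 120 fps -> 8.3 ms

# PSVR's reprojection path renders at 60 fps (a 16.7 ms budget) and
# re-projects each rendered frame with the latest head pose to feed the
# 120 Hz display, so the game still has to run roughly twice as fast as
# a 30 fps console title.
```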
 
You're leaving out a key interest Sony has: VR. If they don't create a box that will actually be able to handle modern games in VR at 60+ fps, they're in trouble. They can drop settings a lot to compensate, but that's not the VR experience people want. If Microsoft releases a new VR-ready console that pairs with Oculus and has the theoretical performance increase that AMD will bring with Polaris and their later APU revisions, then I think they do care. Keeping users on their infrastructure is far, far more valuable than selling a console. I'd imagine a single year of PS+ is more valuable to them than all of their console sales.

I know VR isn't actually that big of a deal now. But it will be this decade, and it's all about positioning of infrastructure and services. They want people to stay and grow their numbers because the money is in software and services. Perception could change if PlayStation starts falling seriously behind hardware-wise. That's why I think they do care about what Microsoft and Nintendo are doing hardware-wise.

VR also makes sense of a 2nd GPU for higher frame rates.


[image]
 