nemiroff
Gold Member
> Quest 3 doesn't have eye tracking so no foveated rendering

That's not correct. Quest 3 uses foveated rendering, but fixed foveated rendering.
> That's not correct. Quest 3 uses foveated rendering, but fixed foveated rendering.

I mean, why would you even use fixed foveated rendering? It looks like shit.
> I mean, why would you even use fixed foveated rendering? It looks like shit.

Wat. That doesn't make any sense.
> Wat. That doesn't make any sense.

Yes, it does. You move your eyes around and anything outside of the center would be worse.
Alright Sony, the cat's out of the bag. Just go ahead and support it officially.
I guess I'm just not understanding the Venn diagram of people who:
- Have a PC capable of running Alyx well
- Are clearly interested in VR
- Are clearly interested in Alyx
- Somehow haven't already played it on a headset nearly half the price years ago
But obviously that's going to apply to a few people that only just got into VR, I guess.
> Yes, it does. You move your eyes around and anything outside of the center would be worse.

Your eyes are also increasingly shit the further off center you get.
Quest 3 doesn't have eye tracking, so no foveated rendering, and it doesn't have a dedicated cable either. I'm not saying it's shit, but it's not really the next-gen Quest you would hope for.
Imo, they should have focused on variable focus and eye tracking.
> That's not FFR per se, that's the lens properties and how they affect light rays on different parts of the lens. That's why e.g. the Oculus DK1/DK2 had horrendous image quality at the periphery and tiny sweet spots even before FFR was introduced. With the correct use of a foveation map, FFR doesn't have to worsen the image quality (but as a developer you can if you want..). Hence the diagram I posted earlier.
> If you look at actual foveation maps, they are mapped to the individual lens properties of each headset. Which is also why I wrote that headsets with pancake lenses will benefit more from eye tracking than headsets with fresnel lenses.

Foveated imagery is the higher rendering of an image at specific points. In a game this is typically the center of the display, where a player is most often looking. Tons of articles on this online with examples.
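For reference, since "foveation map" keeps coming up: a minimal sketch of what a fixed one amounts to (toy radii and made-up function names, not any vendor's SDK or a real headset's calibration) - a grid of shading rates that coarsens with distance from a chosen center:

```python
# Minimal sketch of a fixed foveation map; the radii and names are
# illustrative assumptions, not a real headset's lens calibration.
import numpy as np

def foveation_map(tiles_x, tiles_y, center=(0.5, 0.5), radii=(0.35, 0.55, 0.75)):
    """Grid of subsample factors: 1 = full rate, 2 = one sample per 2x2
    pixels, 4 and 8 = progressively coarser periphery."""
    ys, xs = np.mgrid[0:tiles_y, 0:tiles_x]
    u = (xs + 0.5) / tiles_x                    # normalized tile centers in [0, 1]
    v = (ys + 0.5) / tiles_y
    r = np.hypot(u - center[0], v - center[1])  # distance from the map center
    rates = np.ones((tiles_y, tiles_x), dtype=int)
    rates[r > radii[0]] = 2
    rates[r > radii[1]] = 4
    rates[r > radii[2]] = 8
    return rates

print(foveation_map(16, 16))
```

A lens-matched FFR map would fit those radii to the specific lens; an eye-tracked variant would pass the tracked gaze point as `center` each frame instead of the lens center.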
> Foveated imagery is the higher rendering of an image at specific points. In a game this is typically the center of the display, where a player is most often looking. Tons of articles on this online with examples.
> Lenses matter, but Fixed Foveated Rendering is far worse than Eye-tracked.
> Here's an article from UploadVR for those interested:
> Here's The Exact Performance Benefit Of Foveated Rendering On Quest Pro (www.uploadvr.com): "Quest Pro supports Eye Tracked Foveated Rendering, but exactly how much does it improve performance? If you're not familiar with the term, eye tracked foveated rendering (ETFR) is a technique where only the region of the display you're currently looking at is rendered in full resolution..."
> So back to your earlier comment about it not making sense, yes, it does.

What the fuck is going on.. Are you for real..? That's the article I posted the image from earlier! (5%-9% performance over eye tracked FR)
> What the fuck is going on.. Are you for real..? I've been doing this for twenty years. How do you not know that lenses, especially fresnel lenses, are inherently blurry away from the center? Hence why FFR was introduced.

You literally claimed above that it doesn't make any sense how the edges would look worse when looking around in a game with Fixed Foveated Rendering.
They DO look worse outside of the detailed render area. That's a fact.
20 years? Are YOU for real? Foveated rendering is not hard to understand, and even less so to see how it works with videos showcasing it in action.

> You literally claimed above that it doesn't make any sense how the edges would look worse when looking around in a game with Fixed Foveated Rendering.

You'll feel like a schmuck when you realize..
> You'll feel like a schmuck when you realize..

When you lose an argument.
I'll just go ahead and put you on ignore.
> Analysis of the Quest Pro. FFR vs ETFR:

Is there any test methodology data provided for this?
That chart in and of itself says absolutely nothing - we're talking about methods designed to save on pixel compute, and I have no idea what they were measuring there when they say 'GPU', or under what conditions.
Implementation matters as well - if someone does this the naive way - e.g. using VRS - you can and will run into diminishing returns all over the place depending on the scene topology - and that has nothing to do with actual workload gains, just limitations of VRS itself.
Also, the statement that a 'more aggressive FOV map' means 'more lossy' makes the entire comparison pointless if it's true.
The comparison only works if the quality metric is fixed (and assessing that objectively between the two is difficult - given that the entire point of these methods is minimizing the amount of pixel-work done while maintaining perceptual quality) - if it's not, what's to stop someone from being 'extra aggressive' and just downsampling to arbitrarily nonsensical numbers?
> It'll come, but it's not a feature on PCVR even worth discussing. PCs have had fixed foveated rendering for the longest time if performance is a concern.

PCVR was entirely brute-forcing everything for the longest time because hw support to implement variable pixel-distribution in PC GPUs was a fragmented shit-show until 2019 or thereabouts. Performance was always a concern - but the most viable solution for end-users was to buy a bigger GPU.
Even today - the one standard that does have broad support (VRS) is substantially limited - though it's better than the situation before, at least.
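To make concrete what that kind of coarse shading actually saves, here's a toy software emulation of VRS-style per-tile shading rates (made-up names and tile size, nothing like a real GPU API):

```python
# Toy software emulation of variable-rate shading: shade one sample per
# step x step pixel block in coarse tiles and replicate it. The savings
# are simply the fraction of pixels that are never shaded.
import numpy as np

TILE = 16  # screen pixels per shading-rate tile (illustrative)

def shade(u, v):
    # Stand-in for an expensive pixel shader.
    return np.sin(40 * u) * np.cos(40 * v)

def render(width, height, rates):
    img = np.zeros((height, width))
    shaded = 0
    for ty in range(rates.shape[0]):
        for tx in range(rates.shape[1]):
            step = rates[ty, tx]  # 1 = full rate, 4 = one sample per 4x4 block
            for y in range(ty * TILE, (ty + 1) * TILE, step):
                for x in range(tx * TILE, (tx + 1) * TILE, step):
                    img[y:y + step, x:x + step] = shade(x / width, y / height)
                    shaded += 1
    return img, shaded

rates = np.full((8, 8), 4)   # coarse periphery
rates[2:6, 2:6] = 1          # full rate only in the center 4x4 tiles
img, shaded = render(8 * TILE, 8 * TILE, rates)
print(f"shaded {shaded} of {img.size} samples ({100 * shaded / img.size:.0f}% of brute force)")
```

Fixed or eye-tracked, the mechanics are the same - what differs is where the full-rate tiles sit, and how much of the frame's cost was pixel work to begin with.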
> You literally claimed above that it doesn't make any sense how the edges would look worse when looking around in a game with Fixed Foveated Rendering.
> They DO look worse outside of the detailed render area. That's a fact.

Terminology has been abused to hell - but what Oculus refers to as 'FFR' is really intended to be just lens-matching. What the other poster is talking about: the geometry of the lens means that you can efficiently redistribute pixels to mimic the distortion of the lens (aka FFR) and have the exact same perceptual quality as if you rendered it brute-force. This is how the majority of console and mobile VR has operated to date.
> Terminology has been abused to hell - but what Oculus refers to as 'FFR' is really intended to be just lens-matching. What the other poster is talking about: the geometry of the lens means that you can efficiently redistribute pixels to mimic the distortion of the lens (aka FFR) and have the exact same perceptual quality as if you rendered it brute-force. This is how the majority of console and mobile VR has operated to date.
> The reason people equate 'loss in quality' with FFR is that a lot of applications, instead of matching the lens, decide to be more 'aggressive' and cull the pixel resolution lower. But that's not, in and of itself, a property of the approach - it's how developers decide to apply it.
> The same applies to eye tracking (it's supposed to be perceptually lossless - but it's entirely possible to make it lossy and still sort of get away with it).

Maybe I misunderstood you.
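For what it's worth, that "perceptually lossless vs lossy" line can be given toy numbers (illustrative constants only - not a vision-science model, and not anything out of the Meta SDKs): a falloff is lossless as long as its shading density never drops below the eye's acuity at that eccentricity, and lossy once it undercuts it.

```python
# Toy 'lossless vs lossy' check; all constants are illustrative assumptions.
def relative_acuity(ecc_deg):
    # Normalized visual acuity: 1.0 at the fovea, falling roughly
    # hyperbolically with eccentricity (rough placeholder curve).
    return 1.0 / (1.0 + ecc_deg / 2.3)

def shading_density(ecc_deg, full_rate_deg, falloff):
    # Hypothetical foveation config: full rate out to full_rate_deg,
    # then a linear reduction in relative sample density, floored at 0.1.
    if ecc_deg <= full_rate_deg:
        return 1.0
    return max(0.1, 1.0 - falloff * (ecc_deg - full_rate_deg))

def lossless(full_rate_deg, falloff):
    return all(shading_density(e, full_rate_deg, falloff) >= relative_acuity(e)
               for e in range(0, 50, 5))

print(lossless(10, 0.02))  # True:  conservative falloff stays above acuity
print(lossless(2, 0.08))   # False: 'aggressive' map undercuts acuity at ~15 deg
```

Eye tracking keeps the full-rate region on the gaze point, which is what allows a steep falloff without it becoming lossy - but tightening the map past the acuity curve is a developer choice either way.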
> It's not pointless, it's even described in detail in the Meta SDKs. I can't believe after all these years I'd have this type of discussion just because Sony released a headset. It's astonishing.

I have no idea what you're even replying to here?
Yep. It will not work well. I doubt they will get eye tracking and stuff to work.
Not sure why anyone on PCVR would pick this over the upcoming Quest 3
> I don't see how it would be difficult to access the API and turn eye tracking/FFR off and on and measure performance between them at different map resolution levels.

My point was that without being explicit about what you're measuring (GPU metrics aren't one number - not even close, and none of this optimizes for 'GPU' as a whole), and exactly how the two are configured (i.e. the perceptual-quality target should be the same between the two runs - else you're comparing apples to coconuts), I have no idea what the graph is telling me.
The second bit is that - as I alluded to in another post - implementing non-linear distribution of pixel/sample coverage can have great variance in what it does for GPU performance as well - and this is completely orthogonal to whether you use eye tracking with it or not. If I pick a geometry-limited method, and my scene happens to use a lot of geometry processing, my gains will be proportionally poor. And vice versa - it's *easy* to set up demo scenes that prove just about anything I want them to prove.
Basically, without the context of a broader statistical sample (and accounting for the combination of the above variables), making blanket statements about how X is Y% different from Z is meaningless.
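A back-of-envelope way to see why (my own toy numbers, not from any benchmark): a technique that removes a fixed share of pixel-shading work speeds the frame up only in proportion to how pixel-bound that frame is, Amdahl-style.

```python
# Amdahl-style toy calculation; the 60% pixel savings and the
# pixel-bound fractions below are made-up illustrative inputs.
def frame_speedup(pixel_bound_fraction, pixel_savings):
    remaining = (1 - pixel_bound_fraction) + pixel_bound_fraction * (1 - pixel_savings)
    return 1 / remaining

for p in (0.3, 0.6, 0.9):  # share of GPU frame time spent on pixel work
    print(f"{p:.0%} pixel-bound -> {frame_speedup(p, 0.6):.2f}x")
# 30% -> 1.22x, 60% -> 1.56x, 90% -> 2.17x: same technique, very different 'gains'
```

The same class of optimization can land anywhere on that spread depending on where the frame's time actually goes.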
I said exactly the same thing when the Unity demos for PSVR2 were first shown with the various gains they achieved - none of those multipliers meant anything in isolation - but people were all too keen to take the highest or lowest number (depending on what they were trying to prove) and run with it.
> Regarding the numbnuts I tried to talk to earlier, I forgot to mention FOV distortion, which, like lens distortion itself, is a contributing factor to why FFR is so beneficial even without eye tracking.

Lens and FOV distortion is *the* reason why FFR exists at all. If GPUs natively supported non-linear projection rendering, we wouldn't even be having a discussion about it - everyone would just plug the lens equation into the camera matrix and be done with it - but we don't have that.
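To illustrate what that lens equation implies for pixel cost (a generic polynomial distortion model with invented coefficients, not any headset's real profile): under a linear projection, the periphery of the eye buffer gets minified on the display, so brute-force rendering shades several pixels there for every pixel you actually see.

```python
# Generic radial distortion model r' = r(1 + k1 r^2 + k2 r^4); the
# coefficients are invented for illustration, not a real lens profile.
def distort(r, k1=0.15, k2=0.10):
    return r * (1 + k1 * r**2 + k2 * r**4)

def rendered_px_per_display_px(r, eps=1e-4):
    # Local magnification is the derivative of the display-to-eye-buffer
    # mapping; squaring it gives the area ratio, i.e. how many rendered
    # pixels collapse into one displayed pixel at that radius.
    return ((distort(r + eps) - distort(r)) / eps) ** 2

for r in (0.0, 0.5, 1.0):  # normalized distance from the lens center
    print(f"r = {r:.1f}: ~{rendered_px_per_display_px(r):.1f} rendered px per displayed px")
# prints ~1.0, ~1.3, ~3.8 with these toy coefficients
```

A lens-matched FFR map just claws that factor back, which is why it can be perceptually free; eye tracking is about cutting further than the lens alone justifies.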
Sony should just fucking post official PC drivers already. If they really want to push VR as a platform, they need to not be so restrictive with their own share of it.
> Don't they lose money on each unit? Without sales from the game store if they support PCVR, it must not be interesting for them financially.

Fair point, I suppose. I didn't realize they were selling the units at a loss.

> Don't they lose money on each unit? Without sales from the game store if they support PCVR, it must not be interesting for them financially.

I've searched Google and I can't find any articles that state whether Sony makes or loses money on PSVR2.
> You're free to change the direction of the discussion to be more about how to accurately measure performance, and question methods. That's legit. But my journey into this topic was as a comment against the perceived notion that ETFR is an exclusive holy grail of performance boosting for VR. That's all.

It's hardly an exclusive given there are multiple headsets that support it (and one released like - 4 years ago, IIRC).
It is borderline useless on PC - but that applies to most forms of similar optimization on PC, including 'FFR', due to the aforementioned hardware fragmentation. Closed boxes are the primary benefactors here, so consoles and Quest, pretty much.

> 1. It kinda isn't
> 2. It's contextual.

I appreciate the reasoning for the first point - but your second one is what my argument was all about. I don't think blanket statements in either direction are meaningful without context. And said context for the Oculus benchmarks is pretty specific to their hw/sw combination, at a given point in time.

> Hence my reference to lens tech, lens properties (and later FOV warping). It's also important to take into consideration the intersection line between performance and image quality, which is all in the hands of the developer.

We obviously agree here - that's what I was referencing in the first post. When a benchmark alludes to variable quality settings, it already sets off alarms for me because I have no idea how they're measuring that. I.e. if we're going to make statements about the benefits of different optimization techniques, we either need to affix quality, or performance, and observe the other metric in isolation to get a sense of the trade-offs. Moving both just makes the whole thing a jumble of noise.

> The graph itself was taken from a talk Meta held about optimizing performance for the Quest headsets. In fact, the segment it was taken from was clearly more of a promotion of the newly implemented ETFR for the Quest Pro in their SDK rather than FFR per se.

That helps as context, but it also makes my point for why it's not useful for broader comparisons across different hw/sw stacks.
To give a specific example of a similar thing - Sony had benchmarks of 4-5 different FFR/lens-matching optimizations for PSVR back in 2016 (The reason there were multiple approaches is that PS4 hw had no direct way to do variable render-target resolution, so each approach had different trade-offs. PS4Pro added hw for it - but that would only benefit 5-10% of your userbase).
Now - the best case for optimization on the PSVR lens (assuming we were targeting no quality loss compared to a brute-force/naive render) was somewhere around 2.2x. The different techniques landed all over the place (on the same demo-level scenario), from 1.2x to close to 2.0x. All of them were doing the same kind of optimization - i.e. FFR - but savings were variable for the specific test case.
On a retail title I worked on, we used one of said techniques, as it fit reasonably well with the rendering pipeline we were working with. Contrary to Sony's own benchmark (which put it second to last in terms of raw performance increase), we were getting approximately a 1.8-2x performance increase on average - so very different results. Not because there were attempts at misleading - the types of content rendered, combined with the specifics of the rendering pipeline, simply yielded different returns.
TLDR - sweeping generalizations made about graphics optimizations are more often wrong than not - and we're really looking at a statistical spectrum with any of these.
My personal view on ETFR itself is that we're still in very immature stages when it comes to production codebases (and that's where the impact is actually measured for the end-user), and we're up against 30 years of rendering-pipeline evolution that went in a different direction - and that's just the software bit. And frankly, we still struggle with many of the basics in VR pipelines - so I never expected we'd be getting big returns early on with this either.
And on the hw front there may well be issues as well - admittedly I've not really followed that closely what current hw is actually capable of vs. the theoretical limits of where we want it to be.
But to the point - academic research precedent shows the theoretical best case is orders of magnitude removed from just rendering the scene statically - but getting the software and hardware to that point may well be far in the future.