DF - Nintendo Switch 2 Confirmed Specs: CPU, GPU, Memory, System Reservation + More

Nintendo implemented a RAM management tool in the Switch 1 dev kit that allows devs to use more RAM by turning off some system features like video recording; that tool wasn't available at launch… I'm searching for a link to where I read that info.
Indeed, there are many RAM-intensive games that disable your ability to record video footage.
 
Wait a fucking minute

DF says 10 gigarays/s portable and 20 gigarays/s docked

vWOYaLO.jpeg


That's almost impossible

A 3080 & 2080 Ti are 10 GR/s, 2080 is 8 GR/s.

This thing has 12 RT cores at low clocks vs 3080's 68 RT cores @ 1.4 GHz.

How in the fuck
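For what it's worth, the mismatch can be sketched with naive per-core scaling. This is a back-of-envelope sanity check, not a real model: the 3080 figure comes from Nvidia's own demo numbers, the Switch 2 clocks (~561 MHz portable / ~1007 MHz docked) are leak figures, and real RT throughput does not scale linearly like this.

```python
# Naive per-RT-core scaling check of the claimed 10/20 GR/s figures.
# Assumptions: RTX 3080 ~10 GR/s with 68 RT cores at ~1.71 GHz boost;
# Switch 2 has 12 RT cores at the leaked ~0.561/1.007 GHz clocks.
def gigarays(rt_cores, clock_ghz, grs_per_core_ghz):
    return rt_cores * clock_ghz * grs_per_core_ghz

# GR/s per (RT core x GHz), derived from the 3080 figure
per_core_ghz = 10.0 / (68 * 1.71)

docked = gigarays(12, 1.007, per_core_ghz)    # ~1.0 GR/s, not 20
portable = gigarays(12, 0.561, per_core_ghz)  # ~0.6 GR/s, not 10
print(f"docked ~{docked:.2f} GR/s, portable ~{portable:.2f} GR/s")
```

Even granting generational RT-core improvements, that lands an order of magnitude short of the quoted 10/20 GR/s, which is the point being made above.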

Came back here to talk about this elephant in the room... Not sure they're actually right about it; it seems way too optimistic, literally out of the realm of any realistic possibility.

Edit: Kepler made an estimate here; not sure if this is the right way to calculate it, though:
 
Getting Xbox One Kinect vibes here.
Yeah, I didn't mention it but thought about it too. I hope that if really nobody uses this feature (and I very much doubt it), Nintendo and Nvidia could be wise and free up these resources for games, just like MS did with Kinect 2.
 
It's really not, and these insane notions will die rapidly come launch day. The first thing is that the Switch 2 is weak af. It's officially weaker than the Steam Deck OLED




but with access to DLSS and RT cores.

Oh, just that :rolleyes:

The thing is, Ampere FLOPs need to be divided by 2 due to the dual-issue "flopflation" trick that delivered almost zero performance in games.

We've had this conversation. It's Groundhog Day!

Average_1440p.png


Pick 3060 - 12.74 TFlops, divided by 2 according to you :rolleyes: = 6.37 TFlops
Pick 1080 - 8.873 TFlops, right? Motherfucking raw raster TFlops

You're telling me that a 3060 CUDA-core hack managed to outperform Pascal's raw CUDA-core raster of 8.873 TFlops by 25% on a 14-game average

Or at the heels of the 5700XT's 9.754 TFlops

And averages will of course vary game to game, so what about more computationally heavy games?

Like this one, the 3060 at the heels of a 6600XT's 10.6 TFlops in Cyberpunk
CP2077_1440p.png


Or lean even more into compute with mesh shaders, why not?

performance-2560-1440.png


a 3050 at 9.098/2=4.55 TFlops outperforming the 9.754 TFlops 5700 XT

Amazing!

Maybe we should derate RDNA TFlops 🤷‍♂️
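The chart arithmetic above can be laid out explicitly. A minimal sketch using only the figures quoted in this post (the 25% average lead is the poster's reading of the chart, not a new measurement):

```python
# Performance per TFLOP, with and without the contested "divide
# Ampere FLOPs by 2" rule. Figures as quoted in the post above.
def perf_per_tflop(relative_perf, tflops):
    return relative_perf / tflops

gtx1080  = perf_per_tflop(1.00, 8.873)       # Pascal baseline
amp_raw  = perf_per_tflop(1.25, 12.74)       # 3060, paper TFLOPs
amp_half = perf_per_tflop(1.25, 12.74 / 2)   # 3060, halved TFLOPs

# Raw paper FLOPs: Ampere extracts ~13% less per FLOP than Pascal.
# Halved FLOPs: Ampere would extract ~74% MORE per FLOP than Pascal,
# which is the absurdity being argued above.
print(round(amp_raw / gtx1080, 2), round(amp_half / gtx1080, 2))
```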

That leaves the Switch 2 at just 1.5 TFLOPs when docked

oFEdQLs.jpeg
 
What a load of rubbish. On your chart, the RTX 3080 has 30 TFLOPs and the 6800 XT has 20 TFLOPs. Is the 3080 50% faster in raster?

At the end of the day, the only thing you did in your comparison is compare unlike things. Referencing Pascal? Who is talking about Pascal? The only relevant architectures here are RDNA 2 and Ampere. Using AW2, which is specifically developed to use Nvidia's features… Yeah, definitely representative of the GPU's average performance.

Anyone with a semblance of intelligence can see through your nonsensical comparison.
 
It's really not, and these insane notions will die rapidly come launch day. The first thing is that the Switch 2 is weak af. It's officially weaker than the Steam Deck OLED but with access to DLSS and RT cores. The thing is, Ampere FLOPs need to be divided by 2 due to the dual-issue "flopflation" trick that delivered almost zero performance in games.

That leaves the Switch 2 at just 1.5 TFLOPs when docked, with memory bandwidth only matching the Steam Deck OLED at 102 GB/s. In handheld mode, things are significantly worse. With reduced memory bandwidth and reduced performance, it's far behind PC handhelds. Embarrassing for a system releasing in 2025, and I say that as someone who has a Switch 2 Mario Kart bundle preorder at $700 CAD. Well behind the Series S, behind the PS4, and realistically around the Xbox One S, but with a better storage architecture and a dedicated file-decompression block.

Honestly, the Switch 2's price becomes more ridiculous the more you examine it. The kings of misers are at it again with one of the most unambitious systems ever released. This should have been on TSMC 5nm, or at least matched the 2020 next-gen consoles on TSMC 7nm. At least then it would likely run at higher clocks with better battery life….
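For reference, the 1.5 TF figure in the post above is just paper FLOPs from the leaked configuration with the divide-by-2 applied. A sketch, assuming the leaked 1536 CUDA cores and ~1007 MHz docked clock (leak numbers, not official specs):

```python
# Paper TFLOPs = cores x clock (GHz) x 2 FMA ops per core per clock.
def paper_tflops(cuda_cores, clock_ghz, ops_per_clock=2):
    return cuda_cores * clock_ghz * ops_per_clock / 1000.0

docked = paper_tflops(1536, 1.007)   # ~3.09 TF on paper
halved = docked / 2                  # ~1.55 TF under the halving claim
print(f"{docked:.2f} TF paper, {halved:.2f} TF halved")
```

Whether that halving is justified is exactly what the rest of the thread disputes.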
So, in handheld mode it's around 500-700 GFLOPs, almost a docked Switch 1 😂

G5s1wKv.jpeg
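The handheld quip checks out roughly like this, assuming the leaked ~561 MHz handheld clock and the Switch 1's documented 768 MHz docked GPU clock (a sketch of the arithmetic, not a performance claim):

```python
# GFLOPs = cores x clock (GHz) x 2 ops per core per clock.
def gflops(cores, clock_ghz, ops_per_clock=2):
    return cores * clock_ghz * ops_per_clock

switch1_docked   = gflops(256, 0.768)    # ~393 GFLOPs (Maxwell)
switch2_handheld = gflops(1536, 0.561)   # ~1723 GFLOPs on paper
halved           = switch2_handheld / 2  # ~862 GFLOPs under the claim
print(round(switch1_docked), round(switch2_handheld), round(halved))
```

Even the halved figure sits above the 500-700 range quoted above and is more than double a docked Switch 1, for whatever paper FLOPs are worth.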
 


This guy clearly doesn't read whitepapers, KeplerL2

UJzVTnV.jpeg


You can't even compare gigarays/s architecture to architecture without running the same demos, and there's certainly nothing to compare against AMD, since the number depends on the geometry of the demo, and only the Turing whitepaper gives an indication of that demo with each geometry. AMD is in no way bound to publish numbers measured that way, and I've never seen them do it. Anyone can make a box in a demo and get GR/s, and it means what? Compared to what? The only comparison we've ever had is Nvidia's whitepapers for their own demo.


N6XZDTF.jpeg


PY5BeXj.jpeg


Unless he goes and runs those exact demos, the gigarays from his bench and the numbers Nvidia would put in a whitepaper for an architecture cannot be compared.

Unless you think a 2060 laptop has 3.6 times the gigarays/s of Nvidia's own whitepaper figure for the 2080 Ti.
 
I'm aware of RT, IQ, resolution, physics and frame rate, but I'm not much of a techie, especially when it comes to numbers like FLOPs, memory size, etc. May I ask if this is a good specification?

Anyway, what matters with Nintendo are their great exclusives. Just asking. Thanks.
 
I mean, it should be obvious by now that teraflops are a useless metric for capturing performance, but it's the Nintendo clowns that keep parroting that narrative. Might as well call the cohort the Buggy Loop mafia. It's an apt username, since the whole line of reasoning is completely buggy and stuck in a loop of rubbish.
 

Those benchmarks I put up are gonna make you cry?

Again

performance-2560-1440.png


a 3050 at 9.098/2=4.55 TFlops outperforming the 9.754 TFlops 5700 XT RDNA 1 by 13.5%

a 3060 at 12.74/2 =6.37 TFlops outperforming the 10.6 TFlops 6600 XT RDNA 2 by 10.5%

Amazing!

Fucking Nvidia magic ;)
 
Yes, when the specs leaked some people here said that those specs were too good to be true…
 
I think you're the only person who has come to that 1.5 TF conclusion for the Switch 2 when docked… ON THE WHOLE INTERNET… That may be some kind of achievement.
 
We have known those specs for a very long time, but the RAM and CPU allocation is a tad disappointing.

Games look great so far though, and I hope that Nintendo will allow more flexibility when it comes to resource allocation in the future.
This console will give you overall performance of about 2/3 of a Series S when docked and a bit more than 1/3 in portable mode. The best one can hope for at that price point in 2025, I guess...
 
The only magical thing happening here is your ability to delude yourself. RDNA 1 suffering due to a lack of required hardware features against Ampere is amazing? Sure..... Magical indeed. Meanwhile, in the real world where like-for-like comparisons exist, we can see in the charts below that you're fishing for edge cases. Like I said, anyone with even mustard-seed levels of intelligence can see through the nonsense you're spewing...

pusRNjv.png

VHHD9rF.png

Ho1C1iS.png
 
I mean, we already know the Switch 2 is weaker than the RTX 2050 mobile. DF already confirmed it, and the specs were leaked, so we know it's a derivative. We know how that level of hardware performs. The only ones who are surprised are Nintendo clowns.
ZLJsO3N.png
 
Sure you don't, but Digital Foundry does. They have access to developers who confirm this off the record. I mean, you can inhale as much hopium as you want, but the specs are the specs. We have die shots and clock speeds; it's pretty much a done deal.

Blatant lying is ban-worthy trolling
 
In the video, Richard said that those kinds of comparisons are irrelevant when it comes to the Switch 2 (basically all their past videos are now trash)… It also looks like he has some inside information from devs about Switch 2 performance; his tone about the topic has changed a lot… I think you're the one who is going to be surprised next month.
 
I see, so it is a decent upgrade. I might get one, but probably in a year or two. I still have such a backlog on my PS5, and I'll wait for a new Zelda, a new EarthBound, and Metroid 4.
With the way things are going, if you want it, get it right away. I will not be surprised if it gets hit with a price increase shortly after launch due to "tariffs and rising economic conditions". It's the reason I'm buying one at launch. That, and the desire to get a first-edition unit in the event that any hardware flaws are uncovered.
 
The only magical thing happening here is your ability to delude yourself. RDNA 1 suffering due to a lack of required hardware features against Ampere is amazing?

And what does the 6600XT lack here then?

CP2077_1440p.png


Sure..... Magical indeed.

Not as magical as your 1/2 TFlops claim without any fucking numbers to support it

Meanwhile in the real world where like for like comparisons exist

So it should surely be easy to prove that, on average, Ampere is half TFlops compared to AMD

Still waiting

, we can see in the charts below that you're fishing for edge cases.

And you're fishing your claims out of your fucking ass

Like I said, anyone with even mustard seed levels of intelligence can see through the nonsense you're spewing...

I backed mine with numbers.

You need a MOUNTAIN of evidence to make the claim that the Switch 2 docked would be 1.5 TFlops because Ampere is half TFlops on desktop.

Hint, to save everyone time: you'll never be able to prove your stupid theory.
 
I won't be surprised next month. I've seen the gameplay footage and the performance is just bad. I'm buying it despite the poor performance....
Blatant lying is ban worthy trolling
As much as it hurts, it's more than likely correct when comparing FLOPs between Ampere and RDNA 2. You've got to multiply Ampere FLOPs by 0.5-0.67 to get a comparable FLOP number between architectures when it comes to rasterization. This is not news to anyone in the PC GPU space.
 
The ramblings of an insane individual. I swear you don't even look at your own graph. The 3060 has 12.6 teraflops, the 4060 has 15.11 teraflops, the 6600 XT has 10.6 teraflops. Look at the performance difference between the GPUs? Practically non-existent. Like I said, you clearly don't know what you're talking about and only reference the same two graphs. We have an unbelievable amount of data points for all these GPUs, and they don't tell the story you're trying to tell. Keep pushing the incorrect narrative, though, like you're stuck in a buggy loop....
 
The same DF you quoted, and in this very thread.

YkeViDj.jpeg


Keep going
 
The ramblings of an insane individual. I swear you don't even look at your graph. The 3060 has 12.6 teraflops, the 4060 has 15.11 teraflops, the 6600xt has 10.6 teraflops. Look at the performance difference between gpus? Practically non-existent. Like I said, you clearly don't know what you're talking about and only reference the two same graphs. We have an unbelievable amount of data points for all these gpus and they don't tell the story you're trying to tell. Keep pushing the incorrect narrative though like you're stuck in a buggy loop....



By your own fucking claim earlier in the thread, 3060 is 6.3 TFlops. YOUR CLAIM. Half TFlops.

"the 6600xt has 10.6 teraflops. Look at the performance difference between gpus? Practically non-existent."

You just fucking self-owned yourself

Really time to stop posting. This is pathetic trolling.

Edit: forgot to say earlier. On the Alan Wake 2 chart, you were complaining that the 5700 XT lacks features for mesh shaders, yet there's a 3060 at 12.74/2 = 6.37 TFlops outperforming the 10.6 TFlops 6600 XT, an RDNA 2 card, by 10.5%. What's the excuse here again? Your half-TFlops argument is a complete dead end.
 
The 3060 is 8% slower on average than the 6600 XT despite having 20% more "teraflops" and 100 GB/s more memory bandwidth. On top of this, the 6600 XT is equipped with an insufficient amount of Infinity Cache, only 32 MB compared to the bigger RDNA 2 GPUs. It causes performance issues even at 1080p, let alone 1440p. I'd say do the math, but you've proven that you're absolutely clueless.

I'd continue this debate, but I cannot continue a discussion with someone who doesn't even have elementary knowledge of the subject matter at hand.
 
Example? Almost all the games shown in the Creator's Voice videos look and run really well.
Cyberpunk and Hogwarts Legacy. I watch them on a 77-inch OLED, and the flaws are so easy to see even without my glasses. When I compare them to other PC handhelds and consoles, it's just bad. And just so Buggy Loop can continue his meltdowns, I'll be posting a picture of my Switch 2 when I receive it, just like I did with my two PS5 Pros, 4090, etc....
If these specs weren't as good as some want to pretend, we wouldn't be seeing them having meltdowns over them.
That's certainly an interesting interpretation. One clearly rooted in delusion. If you think I'm spending $700 CAD of my own hard-earned money and complaining for no reason, then I think there's nothing left to say.
 
Both games look really good in the Creator's Voice videos…
 
Yes, there's a difference between measured throughput and theoretical peak, but the Switch 2 numbers are clearly theoretical peak, like MS gave for the Xbox Series X, since it's obviously not faster than the RTX 2080 Ti.
bfZLFPO.png
 
The 3060 is 8% slower on average than the 6600 XT

1440p_Average-p.webp


3060 56 fps vs 6600XT 55 fps for 15 games at AMD Unboxed
:pie_thinking:

Not a good start to your counterargument

despite having 20% more "teraflops"

17.7% but I know.. maths

And yet at half TFlops, as per your claim, it manages to beat the 6600 XT's 10.6 TFlops with just 6.3 TFlops. Amazing.

So your entire argument is that it performs like a 56/55*10.6 = 10.79 TFlops GPU on average and not 12.6 TFlops.

Is this close to half TFlops?
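The "effective TFLOPs" arithmetic in this post is just scaling a reference card's paper TFLOPs by the measured fps ratio, something like:

```python
# TFLOPs a GPU "acts like", using another GPU as the yardstick.
def effective_tflops(fps_gpu, fps_ref, tflops_ref):
    return fps_gpu / fps_ref * tflops_ref

# 3060 at 56 fps vs 6600 XT at 55 fps and 10.6 TF (numbers quoted above)
print(round(effective_tflops(56, 55, 10.6), 2))
```

That yields ~10.79 TF, i.e. roughly a 15% derate from the 3060's ~12.6 paper TF, nowhere near a 50% one.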



Yet here you are telling everyone that the Switch 2 at a half-rate 1.5 TFlops docked (ha ha ha) is outperformed by the Steam Deck's 1.6 TFlops of RDNA 2, with the kind of logic you show above. How absolutely fucking braindead.

and 100 GB/s more memory bandwidth than the 6600 XT.

Oh so AMD gimps their card capabilities now :rolleyes:

On top of this, the 6600 XT is equipped with an insufficient amount of Infinity Cache, only 32 MB compared to the bigger RDNA 2 GPUs.

How much L3 cache does the 3060 have again? Oh shit, 0 MB.

Oooops, maybe that explains the requirement for more bandwidth. Different architectures and all. I've only said it on like every page of this thread so far?

How much Infinity Cache do the RDNA 1.5 & 2 consoles and the Steam Deck have? Seems like a very sensitive architecture that quickly gets gimped without a fat pool of cache. :pie_thinking:

It causes performance issues even at 1080p, let alone 1440p. I'd say do the math, but you've proven that you're absolutely clueless.

I'd continue this debate, but I cannot continue a discussion with someone who doesn't even have elementary knowledge of the subject matter at hand.



You have work to do on that half-TFlops claim, troll. Don't even bother replying unless you've put it on paper.
 
Both games looks really good in the creator voice videos…
Well, if that's how you feel, more power to you. To give you a point of reference as to where I'm coming from: I think Cyberpunk looks bad on PS5, Series X, Series S, Steam Deck, etc. This is due to low resolutions, low crowd density, the use of FSR2, inconsistent frame rates, bad TAA ghosting, and a really bad SSR implementation by CDPR. It only starts looking good to me on PC with RT enabled and DLSS to address some of the many rendering issues. If I think those look bad, there's no way I can think the Switch 2 looks good, especially when many of those flaws are magnified by the lower input resolution.

That being said, I still preordered Cyberpunk because I want to support CDPR, as they did the right thing by not using the stupid "game-key cards"…
 
Actually, it's mostly the Switch 2 optimists who say "don't read too much into TFLOPs," because on paper the console looks around Steam Deck level in raw performance, yet it wipes the floor with it when comparing third-party games...
 
Well, it's to be expected, is it not? If games were coded directly for the Steam Deck the way they are for the Switch 2, I'd hazard that we'd see vastly different results. As for it cleaning out the Steam Deck (OLED) in third-party games, there's not nearly enough data to make that declaration. That being said, I have no vested interest in the Steam Deck, as I sold mine in the lead-up to the Switch 2 release.

We'll see more after the games are released. I think we'll need to see where the cuts were made in the Switch 2 versions and whether the Steam Deck even has access to those cuts for performance purposes. The Steam Deck is just a portable PC and the Switch 2 is a console. In terms of performance, a console always gets more out of the hardware over the long run.
 
The thing is: what matters is how it looks and plays; it doesn't matter why... Switch 2 versions of games look just like the current-gen console versions but at a lower frame rate and/or resolution... I'm curious to see your take on the console when it comes out. I'll get it at launch too.
 
The thing is: what matters is how it looks and plays; it doesn't matter why...
Agreed. I wish people would focus on that instead of trying to push a power narrative that they're not qualified to discuss.
Switch 2 versions of games look just like the current-gen console versions but at a lower frame rate and/or resolution...
I strongly disagree with this statement. However, I've only looked at four third-party games, and half of them do not measure up. There's also no access to unedited extended gameplay footage of third-party games. The little snippets I've seen don't lead me to that conclusion. That said, I don't plan to purchase many third-party games on Switch due to the excessive use of game-key cards. As for first-party games, it's nowhere near current gen at all. It's very far away.
I'm curious to see your take on the console when it comes out. I'll get it at launch too.
I'm not going anywhere so I'll be here to comment for sure. I'm looking forward to Digital Foundry's analysis along with Brazil Pixel.
 
Yes, there's a difference between measured throughput and theoretical peak, but the Switch 2 numbers are clearly theoretical peak, like MS gave for the Xbox Series X, since it's obviously not faster than the RTX 2080 Ti.
bfZLFPO.png

Those are only the hardware-accelerated ray/box and ray/triangle intersection testers, without the BVH processing Nvidia does in hardware, and those are not gigarays/s; they are intersections/sec.
It's 1 intersection test per TMU per clock.

To get to gigarays/sec you need a number of intersections per ray, and that is ray-bounce and geometry dependent: how efficient the BVH is, how many tests you need for the given geometry, and how it stalls waiting for instructions or data.

Gigarays/sec are benchmark results. There's no theoretical peak to calculate.

So the numbers you calculated land neatly close to what DF mentions, but if DF truly means gigarays/sec, which only Nvidia uses as far as I know, then it doesn't make an ounce of sense compared with what you calculated.

This seems like an impossible comparison. Gigarays/s are effectively Nvidia's invention for old scientific-paper ray-tracing demos (Stanford), and I'm not sure how the Switch 2 can have numbers so high compared to Nvidia's own 2080 Ti benches. I agree it's sus; I'm sure there's a communication problem between DF and whoever is leaking, but the figures also cannot be compared to intersections/sec like you did. We're comparing potatoes to apples.
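To make the potatoes-vs-apples point concrete: here is the kind of number RDNA 2 can quote (peak intersection tests, as in the Series X figure referenced above) versus a "gigarays" number, which only exists relative to a workload. The 52 CU / 4 TMU / 1.825 GHz figures are the public Series X specs; the tests-per-ray value is deliberately arbitrary.

```python
# Peak box-intersection test rate: 1 test per TMU per clock.
def peak_box_tests_g(cus, tmus_per_cu, clock_ghz):
    return cus * tmus_per_cu * clock_ghz   # billions of tests/s

xsx = peak_box_tests_g(52, 4, 1.825)       # ~380G/s, the marketing figure

# "Gigarays/s" from that requires an assumed tests-per-ray count, which
# depends entirely on scene geometry and BVH quality. 40 is made up.
def gigarays_from_tests(tests_g, tests_per_ray):
    return tests_g / tests_per_ray

print(round(xsx, 1), round(gigarays_from_tests(xsx, 40), 1))
```

Pick 20 tests per ray and the "gigarays" double; pick 80 and they halve. That is why a theoretical-peak intersection rate and a benchmarked GR/s figure can't be compared.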
 