DF - Nintendo Switch 2 Confirmed Specs: CPU, GPU, Memory, System Reservation + More

The brief snippet at the end appears to confirm that DLSS 4 isn't available... yet.


The Switch reserves 800MB for its software. That is a very basic suite of features plus the eShop, and the performance and loading are simply dreadful. You can't even access a ton of features while running a game because there simply aren't enough resources and RAM for that. From what has been shown so far, the Switch 2 eShop is more feature-rich, the software is dramatically snappier, and it has that GameChat feature that needs to keep up with multiple video streams at once. All of that is going to dramatically blow up the amount of RAM required. Nintendo really wants the Switch 2 to come off as a more premium device compared to the Switch 1 and having a slow and messy OS/eShop is going to run counter to that.

Yes, but the GameChat feature uses very choppy and seemingly low-resolution video feeds... it's hard to even call them feeds, as that would imply full framerate, which they aren't. The eShop might be better, and I'll grant that the UI is running at a higher resolution so it will need more resources, but 3 GB is still too much IMO. The Switch 2 is supposed to have very fast internal UFS 3.1 storage; that's at least 2 GB/s of bandwidth right there.

Why aren't they utilizing more of the internal storage as a fast swap for OS elements, as they should be? Keeping 1 GB or so of OS apps and data compressed in a region of internal storage, to be accessed, decompressed and populated into a 2 GB swappable bank of RAM, might bring a slight performance penalty, but it would absolutely be more worthwhile for devs (even some internal 1P teams, I'm sure) than hogging 3 GB for the OS.
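To put rough numbers on that idea, here's a minimal sketch; every figure in it (compressed size, decompression speed) is an illustrative assumption on my part, not anything confirmed for Switch 2:

```python
# Rough arithmetic for the "page compressed OS data in from storage" idea.
# Every figure here is an illustrative assumption, nothing confirmed for Switch 2.

STORAGE_READ_GBS = 2.0    # claimed UFS 3.1 sequential read speed
COMPRESSED_OS_GB = 1.0    # hypothetical compressed eShop/OS data kept on storage
DECOMPRESS_GBS = 4.0      # hypothetical decompression throughput

read_time_s = COMPRESSED_OS_GB / STORAGE_READ_GBS   # ~0.5 s to pull it off storage
inflate_time_s = COMPRESSED_OS_GB / DECOMPRESS_GBS  # ~0.25 s to decompress it
print(f"worst-case page-in: ~{read_time_s + inflate_time_s:.2f} s")
```

Even with pessimistic assumptions that's well under a second of hitching when a paged-out OS feature gets reopened, which is the trade-off being argued for here.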

But it's also possible they could scale back the amount over time, like SIE and MS have done.

DF gets this kind of information directly from SDKs and developer documentation, the same as they did with the PS5 Pro, which they were spot on about as well.

Maybe, but DF have shown before that they don't exactly parse that info the right way or in good faith, so I'm waiting for sources not intrinsically linked to IGN to provide verification before I fully accept it.

Sorry that's just the way I handle DF and anything related to IGN these days, unless it's John's DF Retro series. Now that's good stuff. I wish NXGamer would return to their retro gaming (PC retro gaming specifically) series again on their Youtube channel, too.
 
Have they confirmed if it supports video captures? Those can take up quite a bit of memory and it's probably not coincidence that PS4/5/Xbox all reserve similar amounts.

When MS lowered the OS reservation on the Series S, 4K image and video captures both got cut, and now it only supports 1080p captures.
The screen recording button is there.
 
Is an HDMI 2.0 chipset so much cheaper for Nintendo than 2.1 that it's worth missing out on VRR?

On Reddit and socials I see quite a bit of backlash against the potential loss of VRR.
 
Have they confirmed if it supports video captures? Those can take up quite a bit of memory and it's probably not coincidence that PS4/5/Xbox all reserve similar amounts.

When MS lowered the OS reservation on the Series S, 4K image and video captures both got cut, and now it only supports 1080p captures.
It's on Switch 1; I don't think they'll remove already existing features unless they have a good reason for it, and the increase in OS-reserved resources suggests otherwise.
 
What are you on about here? New game systems have always been compared to other systems, and especially to the previous generation. How else would anyone have an idea of the improvements that were made?

Only since 2013 have we had consoles that could be considered similar to each other. The PS3 and X360 were very different from each other, yet that didn't stop anyone from comparing them. The same goes for all previous gens as well. By your logic, the Switch 2 couldn't even be compared to the Switch 1.

You do not feed an AMD 2012 GCN architecture like you feed a custom 2021 Ampere. Not only are they not from the same company to begin with, in an era where Nvidia and AMD diverged a lot in design, but it's not even in the same ballpark of GPU advancements. A fuckload happened between Fermi/Kepler and Ampere/Ada.

What's hard to understand?

Or do you think a Vega 64 with a 2048-bit memory bus is much better than a 1080's 256-bit memory bus?

I'm getting really fucking tired of repeating this information. I'll just copy/paste the old info I had because even with DF's video today barely anything has changed from my post, TFlops and all.

=====================================================
For Switch 2 bandwidth:

The T239 in the Switch 2 follows the entire Ampere lineup's usual ~25 GB/s per TFLOP, which leaves ~25 GB/s remaining for the CPU; that's more than plenty for the ARM A78 cores.

With the estimated TFLOPS from the T239 leaks:

Handheld: 1.7 TFLOPS * 25 + ~25 GB/s for CPU = 67.5 GB/s → DF estimated 68.26 GB/s
Docked: 3.1 TFLOPS * 25 + ~25 GB/s for CPU = 102.5 GB/s → DF estimated 102.4 GB/s

More examples of Ampere's ~25 GB/s per TFLOP:

3060 @ 12.74 TFLOPS for 360 GB/s → 28.25 GB/s per TFLOP
3070 @ 20.31 TFLOPS for 448 GB/s → 22.1 GB/s per TFLOP
3080 @ 29.77 TFLOPS for 760 GB/s → 25.5 GB/s per TFLOP
3090 @ 35.58 TFLOPS for 936 GB/s → 26.3 GB/s per TFLOP
=====================================================
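If it helps to see that rule of thumb written out, here's a minimal sketch; the ~25 GB/s-per-TFLOP ratio, the ~25 GB/s CPU reserve, and the TFLOPS figures are the assumptions from this post, not official numbers:

```python
# The post's rule of thumb: Ampere GPUs get roughly ~25 GB/s of memory bandwidth
# per TFLOP, plus a reserve for the CPU cluster. Nothing here is an official figure.

GB_PER_TFLOP = 25      # rough Ampere bandwidth-per-TFLOP ratio (from the post)
CPU_RESERVE_GBS = 25   # rough allowance for the ARM A78 CPU cluster (from the post)

def estimated_bandwidth(tflops: float) -> float:
    """Total memory bandwidth implied by the rule of thumb, in GB/s."""
    return tflops * GB_PER_TFLOP + CPU_RESERVE_GBS

for mode, tflops, df_estimate in [("Handheld", 1.7, 68.26), ("Docked", 3.1, 102.4)]:
    print(f"{mode}: {estimated_bandwidth(tflops):.1f} GB/s (DF estimate: {df_estimate} GB/s)")
```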

It's being fed with bandwidth exactly according to modern Nvidia architectures' needs. It's a known quantity. As long as Ampere SMs are fed this ballpark of bandwidth, any argument or concern trolling that there's a bandwidth problem simply deserves rolleyes. Fucking hilarious that people think Nvidia engineers would gimp their 12 SMs with too little bandwidth. This is the same vibe as DF thinking that Nvidia would gimp their custom APU for Switch 2 by leaving out DLSS. Takes a special kind of childlike naivety.

GCN/Tahiti just can't spit out vertices worth a fucking damn because AMD decided to have 2 geometry engines for 2048 cores in that era. You NEED high bandwidth because it's a massive computational pool with a pipeline the size of a garden hose. Even Nvidia's Kepler back then had 1 geometry engine (PolyMorph engine, they called it) per 48 CUDA cores: 1 per SM. Then it became 1 per 128 CUDA cores in Pascal. GCN had a terrible SE:CU ratio (shader engines vs compute units).

https://substack-post-media.s3.amazonaws.com/public/images/65884a1a-a6a3-433a-b5b2-b94e9f761918_2481x1393.jpeg


Kepler comparatively had 1:2 SE:CU ratio.

AMD's entire effort with RDNA was to undo years of consequences of trying to make GCN a thing. Tahiti was such a huge pool of cores versus primitives that it had shit occupancy. GK104 could reach max occupancy in a fifth of the clocks Tahiti needed. GCN issued one instruction per wave every 4 cycles, while Kepler did it every cycle. Can you guess when AMD changed it to every cycle? RDNA.
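Just to make that disparity concrete, here are the figures from this post laid out in one place (purely the numbers already quoted above, nothing new):

```python
# The post's own figures, side by side: shader cores per geometry unit, and how
# often an instruction can be issued per wave. Lower is better in both cases.

cores_per_geometry_unit = {
    "GCN/Tahiti (2 geometry engines for 2048 cores)": 2048 / 2,  # 1024
    "Kepler (1 PolyMorph engine per 48-core SM)": 48,
    "Pascal (1 PolyMorph engine per 128 cores)": 128,
}
issue_interval_cycles = {"GCN": 4, "Kepler": 1, "RDNA": 1}

for arch, ratio in cores_per_geometry_unit.items():
    print(f"{arch}: {ratio:.0f} cores per geometry unit")
for arch, cycles in issue_interval_cycles.items():
    print(f"{arch}: one instruction per wave every {cycles} cycle(s)")
```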

You're comparing such drastically different architectures that going by hard numbers like bandwidth is meaningless.

Do you even know what Ampere brought to the table compared to just Turing? I don't even need to go back in time all the way through Turing→Pascal→Maxwell→Kepler from 2012; just Turing to Ampere brought:
  • Improvements in concurrent operations (concurrent raster/RT/ML, which Turing couldn't do)
  • An asynchronous barrier to keep execution units near full occupancy
  • Asynchronous copy from global memory, reducing memory traffic
  • Which also serves to hide data copy latency
Without even going into differences with older gens, such as mesh shaders, which eliminate a lot of memory accesses for geometry.

Your car analogy is equally flawed. Motor vehicles are compared to one another all the time, regardless of classification, powertrain, even number of wheels.

Sure, compare a 1000HP Hummer against the McLaren F1 then on the Nürburgring 😂

Or if you think a 1400HP F1 car from 1980 could outlap the <1000HP modern F1 cars

Well, in terms of GPU technology, at the speed it's advancing, it's pretty much that kind of gap.

You remind me of the accountants where I work who think a transformer's MVA rating should be comparable for an easy estimate of costs, without looking at all the other factors that affect its performance.

My analogy still stands. The modern Ampere architecture, which has remained largely untouched in Ada as far as raster goes, is a freaking leap more efficient than AMD's 2012 GCN. I can't believe I even have to hammer on that nail repeatedly, again and again.
 
Is an HDMI 2.0 chipset so much cheaper for Nintendo than 2.1 that it's worth missing out on VRR?

On Reddit and socials I see quite a bit of backlash against the potential loss of VRR.

Which HDMI version they use is irrelevant. The issue is the conversion from DisplayPort to HDMI, which is how they send the signal to the TV. There's probably an issue with the conversion chip and HDMI VRR working correctly.

There's no big difference in VRR support between HDMI 2.0 and 2.1 (outside of TVs sometimes having different lower bounds for VRR depending on the HDMI version of the source).
See the LG C1, for example:

[screenshot: LG C1 VRR support specs]
 
It's on Switch 1; I don't think they'll remove already existing features unless they have a good reason for it, and the increase in OS-reserved resources suggests otherwise.
I didn't own a Switch 1, but wasn't that only like 30 seconds at 720p? If they're supporting longer captures (never mind at 4K60), that could probably add another ~1GB to the OS reservation.
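For a sense of scale, the RAM a rolling capture buffer needs is basically just bitrate times clip length; the bitrates and durations below are illustrative guesses on my part, not anything Nintendo has confirmed:

```python
# How a rolling capture buffer scales with bitrate and clip length.
# All bitrate/duration values below are illustrative assumptions.

def ring_buffer_gb(bitrate_mbps: float, seconds: float) -> float:
    """RAM needed to hold an encoded rolling clip of the given length."""
    return bitrate_mbps * seconds / 8 / 1000  # megabits -> gigabytes

print(f"{ring_buffer_gb(12, 30):.2f} GB")   # ~0.05 GB: short 720p30 clip, Switch 1 style
print(f"{ring_buffer_gb(60, 120):.2f} GB")  # ~0.90 GB: longer, higher-bitrate 4K clip
```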
 
 
So, from what others have explained in past threads, PS5 and Xbox apparently don't offer multi-face/multi-user video in party chat. (I have a PS5 Pro but haven't tried it myself.) You apparently get a single face at most -- and only when streaming publicly.

So that aspect -- multiple faces visible during gameplay in a private party -- is a new feature on Switch 2, at least for consoles.

From what I understand, it's technically possible to replicate something like this (multi-face/multi-user video in party chat) on PC, but it's not straightforward. There's still no native setup that mirrors Switch 2's GameChat.

I don't use it personally, but from what I can gather: Discord doesn't composite gameplay behind each user's face, and doesn't support multi-user video overlay during gameplay.

Certainly not like Switch 2, where you see four AI-cropped (~30 FPS) facecams overlaid on (admittedly downscaled/~10 FPS) gameplay feeds, arranged in four windows below your main gameplay screen. I haven't been able to find any comparable Discord examples online.

You could argue that no native solution like this exists on PC because there's no demand, but we'll have to see whether this catches on and demonstrates the value of something like this, i.e., an attempt to replicate the feel of in-person multiplayer, even when everyone's remote.

Edit: someone from Era.
 
I didn't own a Switch 1, but wasn't that only like 30 seconds at 720p? If they're supporting longer captures (never mind at 4K60), that could probably add another ~1GB to the OS reservation.
Yeah, I totally agree. Also, I don't think Nintendo will allow more than that, especially in 4K.
 
Getting Xbox One Kinect vibes here.
It's hilarious that generally speaking Nintendo is delivering a no-nonsense upgrade by their standards...and yet still sort of kneecapping potential performance with this 180p/10fps video chat feature that nobody asked for...lol

Like let's be honest this isn't going to be widely used outside of maybe Mario Party.
 
In theory they can reduce it down to 2.5 GB or so with future updates and optimizations.
The SoC design specifications were ready; that only means they knew what the SoC would have once it went into production. Only after that could they start working on the features of the console that depend on it... which is almost everything, lol. They wouldn't have started working on it until 2022 or 2023 IMO, since (and I'd appreciate a correction if this is not the case) Nvidia was still working on the Ada Lovelace production pipeline, making tests, etc. So in 2021 they only had design specifications for the SoC itself, nothing else, not a console ready to be sold or whatever; basically a blueprint of what the future hardware would bring to the table.
 
Nintendo really wants the Switch 2 to come off as a more premium device compared to the Switch 1 and having a slow and messy OS/eShop is going to run counter to that.
It is a next-gen console 8 years after the last. You didn't expect things like the eShop to be worse, did you?
 
OS optimization and allocating more memory for games in firmware upgrades is nothing new in the industry… Sony, Nintendo and Microsoft have done it before.

There is no guarantee of that kind of optimization to reduce that kind of consumption. You guys are just speculating ideal numbers.

For that to happen is to admit the OS is not optimized at all, consuming more memory than it is supposed to consume.
 
Ugh, stupid GameChat will most likely take up most of the 3GB.
I'm pretty sure developers would prefer to use that extra RAM.
 
There is no guarantee of that kind of optimization to reduce that kind of consumption. You guys are just speculating ideal numbers.

For that to happen is to admit the OS is not optimized at all, consuming more memory than it is supposed to consume.
And? That's not an issue or a crime 😂… That's why OPTIMIZATION and firmware upgrades exist… Series S optimization freed up more memory for games, the Wii U did the same at one point, and Switch OS optimization enabled video recording… And eventually it's going to happen on Switch 2.
 
And? That's not an issue or a crime 😂… That's why OPTIMIZATION and firmware upgrades exist… Series S optimization freed up more memory for games, the Wii U did the same at one point, and Switch OS optimization enabled video recording… And eventually it's going to happen on Switch 2.

Nobody here is saying it's a crime. Just keep your expectations in check before the crazy talk.
 
Has anyone figured out why the CPU clock is lower in docked mode? It's very odd.

If it's not there it's a BIG miss for Nintendo. No VRR on 40fps games is a recipe for disaster.

It makes no sense to have it for handheld and not for docked.

Big fat miss by Nintendo.

What does VRR have to do with 40fps? You just need a 120Hz display for that and everything will be evenly frame-paced. VRR doesn't go below 40Hz anyway.
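For anyone wondering why that works without VRR, the arithmetic is just this (illustrative only):

```python
# Why 40 fps is evenly paced on a 120 Hz display without VRR: the refresh rate
# divides exactly by the frame rate, so every frame is held for the same number
# of refreshes. On a 60 Hz display the ratio is fractional, which causes judder.

def refreshes_per_frame(display_hz: int, game_fps: int) -> float:
    return display_hz / game_fps

print(refreshes_per_frame(120, 40))  # 3.0 -> each frame held exactly 3 refreshes
print(refreshes_per_frame(60, 40))   # 1.5 -> frames alternate 1 and 2 refreshes (judder)
```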
 
Seems like a lot of people really have high hopes of DLSS doing the heavy lifting for the Switch 2.
 
That's interesting; I was expecting max CPU speed like the Switch 1 but not max GPU speed. Maybe in the future they can increase the GPU clock speed with a firmware update.
They always do that later on in a system's life. I remember when they gave Capcom an extra 10% of processing power with the 3DS for Monster Hunter 4, or something like that.
 
Who cares about performance? Don't we play Nintendo games for their wonderful gameplay?

Well, I'm tired of having to choose. I grew up poring over tech specs of the Nintendo Ultra 64 in GamePro with Mario beta screenshots, and basking in gameplay never possible before the new hardware. Then I learned to care about hardware even more when GoldenEye was 15fps.

Good games and good hardware aren't mutually exclusive; provide what is necessary, please. Because exclusive Nintendo games won't be available anywhere else, a solid baseline matters.
 
Has anyone figured out why the CPU clock is lower in docked mode? It's very odd.



What does VRR have to do with 40fps? You just need a 120Hz display for that and everything will be evenly frame-paced. VRR doesn't go below 40Hz anyway.
Damn, this is so right; for some reason I was confusing VRR with FreeSync Premium. So even if VRR is present, what we have to pray for is that the Switch 2 falls back to FreeSync Premium or G-Sync on TVs when the refresh rate is below 48 Hz, which is the minimum for VRR to work... And even if VRR doesn't work, the Switch 2 will probably use FreeSync or G-Sync anyway, since it has an Nvidia SoC.

Isn't the HDMI standard a little contentious anyway? I mean, the sole reason Linux doesn't send a 4:2:2 or 4:4:4 signal over HDMI when using 4K@120Hz HDR is some conflict with the entity that handles the licenses, isn't it? I'm pretty sure that even if the console were VRR capable, they wouldn't be allowed to promote it without paying the license or whatever...
 
Just remind us how many times Nintendo has freed up system memory before.

You talk as if OS optimization is all about memory management.
Nintendo implemented a RAM management tool in the Switch 1 dev kit that allows devs to use more RAM by turning off some system features like video recording; that tool wasn't available at launch… I'm searching for the link to where I read that info.
 
Too many resources this game chat is taking.
I do NOT understand Nintendo at all. Nobody asked for it and there isn't hype for the feature.

Don't they have focus groups or something?

It's not even a feature that works well (10fps). It's a joke. The resources should've been spent on features people actually want, like game performance.
 
Not that there was really any doubt, but it's better than a Series S
It's really not, and these insane notions will die rapidly come launch day. The first thing is that the Switch 2 is weak af: it's officially weaker than the Steam Deck OLED, but with access to DLSS and RT cores. Ampere flops need to be divided by 2 due to the dual-issue flopflation trick that delivered almost zero extra performance in games.

That leaves the Switch 2 at just ~1.5 TFLOPS when docked, with memory bandwidth only matching the Steam Deck OLED at 102 GB/s. In handheld mode, things are significantly worse: with reduced memory bandwidth and reduced performance, it's far behind PC handhelds. Embarrassing for a system releasing in 2025, and I say that as someone who has a Switch 2 Mario Kart bundle preorder at $700 CAD. Well behind the Series S, behind the PS4, and realistically around the Xbox One S, but with a better storage architecture and a dedicated file decompression block.

Honestly, the Switch 2's price becomes more ridiculous the more you examine it. The king of misers is at it again with one of the most unambitious systems ever released. This should have been on TSMC 5nm, or at least matched the 2020 next-gen consoles on TSMC 7nm. At least then it would likely run at higher clocks with better battery life…
 
The bandwidth and CPU limitations won't be an issue for the next 2-3 years.

Nintendo should have just slapped in 16GB of RAM and called it a day, with plenty of headroom.

The console is going to be amazing, but it's going to sour in regard to specs in year 4 or 5.
 
I do NOT understand Nintendo at all. Nobody asked for it and there isn't hype for the feature.

Don't they have focus groups or something?

It's not even a feature that works well (10fps). It's a joke. The resources should've been spent on features people actually want, like game performance.

I guess they have collected enough data suggesting people use Discord when playing NS online. That's my guess.

They do have a tool to collect data. The Wii U was born out of that data.
 
I guess they have collected enough data suggesting people use Discord when playing NS online. That's my guess.

They do have a tool to collect data. The Wii U was born out of that data.
Yes, but nobody asked for a video feed. Normal voice chat would be good enough and save on resources.
 
Wait a fucking minute

DF says 10 gigarays/s portable and 20 gigarays/s docked

[screenshot: DF spec sheet listing ray-tracing throughput]


That's almost impossible

A 3080 & 2080 Ti are 10 GR/s, 2080 is 8 GR/s.

This thing has 12 RT cores at low clocks vs 3080's 68 RT cores @ 1.4 GHz.

How in the fuck
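A rough sanity check, assuming ray throughput scales with RT core count times clock: the Switch 2 clocks below are assumed/leaked figures, and later RT core generations do more work per core per clock, so treat this purely as a ballpark.

```python
# Back-of-envelope scaling of the gigarays claim. Reference figures are the ones
# quoted above for the 3080; the Switch 2 clocks are assumptions, not confirmed.

def scaled_gigarays(ref_grs, ref_cores, ref_clock_ghz, cores, clock_ghz):
    return ref_grs * (cores * clock_ghz) / (ref_cores * ref_clock_ghz)

print(scaled_gigarays(10, 68, 1.4, 12, 1.0))   # docked (~1.0 GHz assumed)   -> ~1.3 GR/s
print(scaled_gigarays(10, 68, 1.4, 12, 0.56))  # handheld (~0.56 GHz assumed) -> ~0.7 GR/s
```

Under that naive scaling you'd land an order of magnitude below the quoted 10/20 GR/s, so either those numbers are counted very differently or something is off.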

 
The bandwidth and CPU limitations won't be an issue for the next 2-3 years.

Nintendo should have just slapped in 16GB of RAM and called it a day, with plenty of headroom.

The console is going to be amazing, but it's going to sour in regard to specs in year 4 or 5.
It's already sour now. Big games cannot even hold their frame rate. I've watched the Cyberpunk 2077 gameplay; it's just frame drops galore from a game that's like four years old. Hogwarts Legacy, along with a host of other games, has also shown some issues. Nintendo first-party games are already dropping frames before the system is released. We saw this with Zelda BotW at the Switch launch and we knew the system was cooked. GameChat is rumored to cause games to drop frames while displaying video at PowerPoint levels of performance.

The same canaries are in the coal mine once again and people are trying to sweep it under the rug. It's just poor from Nintendo, the king of misers. If the system makes it past the first year without serious performance issues in multiple games, I'd be surprised. The PS5 and Series X are struggling right now, and people think the Switch 2 will be okay.
 