Nintendo Switch Dev Kit Stats Leaked? Cortex A57, 4GB RAM, 32GB Storage, Multi-Touch.

Status
Not open for further replies.
Are we basically expecting a custom version of Maxwell which will use Pascal and be smaller in size? Slightly more power than standard maxwell and a little more battery efficient?
 
It's worth noting that Nintendo want these NS devices to be iterative consoles, so HBM2 would somewhat future-proof the device and probably help with compatibility with the Switch 2.

Unless they can get that low-cost HBM from Samsung in time for the system (I haven't even found an estimate from Samsung for when it would be available), I don't see Nintendo risking that kind of expense on HBM2. One article I read, and even Thraktor, mentioned it's not something you would put into an entry-level GPU card.

If Nintendo are just using LPDDR4 for Switch (one memory pool), would BC and forwards compatibility really be a problem when the next Switch iteration will likely have HBM3 (a cheaper solution than HBM2)?

Are we basically expecting a custom version of Maxwell which will use Pascal and be smaller in size? Slightly more power than standard maxwell and a little more battery efficient?

Uhh... Come again?
 
The actual power of a Nintendo machine is largely irrelevant IMO. It won't be getting full AAA 3rd party support, so it doesn't need to compare to an Xbox or Playstation.

So as long as the hardware is half way respectable, that's good enough. It's a device with the Big N's full focus for 1st party software, and probably very attractive for Indie and mobile type games too.

Altogether that should add up to an attractive games machine which stands out from the rest. Something there for all types of people, and also great for core gamers as a secondary machine. That's Nintendo's market.
 
Correct. There's no way in hell that this thing is clocked at 1GHz when used standalone. 512 GFLOPS is the max we can expect when docked, and around 350 when used on the go: 2x Wii U for 720p and 3x for 900p-1080p gaming on the TV. Basically, the Switch is going to be weaker than what people thought the Wii U would've been (600-800 GFLOPS) five years ago in the WUSTs, and perfectly in line with the two latest Nintendo consoles. "Industry leading chips" and "new GameCube" lol

And please don't even start with Volta.

It's the TX1 with a die shrink and a few minor tweaks. Pascal is pretty much a 16nm version of Maxwell with a few improvements.

Correct me if I'm wrong, but the 768GFlops number is coming from the rumored die shrink to Pascal, right? That's the number for max clocks on Pascal (1.5GHz) with the exact same SM structure as the TX1. Obviously we wouldn't see that in portable mode, but I'm not sure why it's so ridiculous that we could wind up seeing ~500GFlops in portable mode considering the die shrink does increase power efficiency by 60%.

This is assuming the final SoC is essentially a TX1 on the Pascal architecture (aka 16nm node).
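As a sanity check, all of these peak-FLOPS figures fall out of the same simple formula (CUDA cores × clock × 2 FMA ops per cycle); the clock speeds here are the speculated values from this thread, not confirmed specs. A quick Python sketch:

```python
def peak_gflops(cuda_cores, clock_ghz):
    """Peak FP32 GFLOPS: each CUDA core can do one FMA (2 ops) per cycle."""
    return cuda_cores * clock_ghz * 2

print(peak_gflops(256, 1.0))  # 512.0 -- TX1-style 2 SMs (256 cores) at 1 GHz
print(peak_gflops(256, 1.5))  # 768.0 -- the same 2 SMs at Pascal's max 1.5 GHz
```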

The actual power of a Nintendo machine is largely irrelevant IMO. It won't be getting full AAA 3rd party support, so it doesn't need to compare to an Xbox or Playstation.

So as long as the hardware is half way respectable, that's good enough. It's a device with the Big N's full focus for 1st party software, and probably very attractive for Indie and mobile type games too.

Altogether that should add up to an attractive games machine which stands out from the rest. Something there for all types of people, and also great for core gamers as a secondary machine. That's Nintendo's market.

I don't get posts like these.

We're discussing the power for several reasons (it's fun, it's interesting, some people are very interested in tech) but beyond all that, even if the Switch doesn't wind up getting most AAA ports, what's the harm in discussing ways that Nintendo/Nvidia can lower the cost/effort barrier for those potential ports?
 
The actual power of a Nintendo machine is largely irrelevant IMO. It won't be getting full AAA 3rd party support, so it doesn't need to compare to an Xbox or Playstation.

So as long as the hardware is half way respectable, that's good enough. It's a device with the Big N's full focus for 1st party software, and probably very attractive for Indie and mobile type games too.

Altogether that should add up to an attractive games machine which stands out from the rest. Something there for all types of people, and also great for core gamers as a secondary machine. That's Nintendo's market.

This is how I'm seeing it as well, and I believe it will also get ports from mobile games. But a machine with probably at least double the power of the Wii U, with the whole first-party lineup concentrated on it, and able to play on the TV and on the go, seems like a hell of a secondary console.
 
I'm starting to feel like for some people the definition of an AAA game is "big-budget Western game that doesn't come to a Nintendo platform".
AAA is shorthand for the biggest / most important games of a publisher.

This is most easily measured from outside a company by the marketing spend or just unit sales.

The big games that big-spending publishers release do skip Nintendo platforms a lot. (That's also a reason people tried to use "AAAA": an AAA from Ubisoft doesn't compare to an AAA from Koch Media.)
The only notable exceptions I can think of off the top of my head are Activision with COD, Ubisoft with Assassin's Creed, and now Take-Two with NBA 2K.
 
I could totally see the thing ending up with a 64-bit bus / 25GB/s if they reduced the GPU core count to, say, 128. That should be a nice match for the GPU and not such an obvious bottleneck. They'd also get the added benefit of considerably longer battery life during gaming.

256 GFLOPS with the Maxwell feature set is still a pretty decent upgrade from the Wii U.

Wait, why would you expect the retail version to cut half of its GPU grunt? That's a logic I can't follow.
 
The actual power of a Nintendo machine is largely irrelevant IMO. It won't be getting full AAA 3rd party support, so it doesn't need to compare to an Xbox or Playstation.

So as long as the hardware is half way respectable, that's good enough. It's a device with the Big N's full focus for 1st party software, and probably very attractive for Indie and mobile type games too.

Altogether that should add up to an attractive games machine which stands out from the rest. Something there for all types of people, and also great for core gamers as a secondary machine. That's Nintendo's market.
People seem to forget that Nintendo games and third party exclusives benefit from better hardware as well.

Correct me if I'm wrong, but the 768GFlops number is coming from the rumored die shrink to Pascal, right? That's the number for max clocks on Pascal (1.5GHz) with the exact same SM structure as the TX1. Obviously we wouldn't see that in portable mode, but I'm not sure why it's so ridiculous that we could wind up seeing ~500GFlops in portable mode considering the die shrink does increase power efficiency by 60%.

This is assuming the final SoC is essentially a TX1 on the Pascal architecture (aka 16nm node).
In a way, yes, but those figures (768 GFLOPS when docked and around 512 when used standalone) make more sense with a higher number of CUDA cores at a lower clock, which seems out of the question. I expected 3 SMs at 1GHz when docked and 650-700MHz when used standalone. Nintendo likes to make reliable hardware (and thank god for that), so I wouldn't expect them to go that crazy with clock speeds, and currently we don't even know if the active cooling kicks in only when docked.
 
The actual power of a Nintendo machine is largely irrelevant IMO. It won't be getting full AAA 3rd party support, so it doesn't need to compare to an Xbox or Playstation.

It's not really irrelevant. For instance, the design of Skyward Sword and its world was obviously limited by the Wii's modest specs. The same will be true for Breath of the Wild to some extent. The development builds are struggling with frame rate. Obviously, that by itself doesn't say much, since it's still a game in development, but it shows that an open-world game like that has to balance its design against keeping the lowest acceptable performance baseline of 720p@30fps. It's not the same as a Mario Kart game, where you can implement the same design on a much slower system by basically just scaling down visual features. Ideally, you want developers to have a platform that allows them to implement their vision without living on the edge of performance. The developers of Zelda will certainly appreciate the easier time they (hopefully) have on the Switch.
 
People seem to forget that Nintendo games and third party exclusives benefit from better hardware as well.


In a way, yes, but those figures (768 GFLOPS when docked and around 512 when used standalone) make more sense with a higher number of CUDA cores at a lower clock, which seems out of the question. I expected 3 SMs at 1GHz when docked and ~700MHz when used standalone. Nintendo likes to make reliable hardware, so I wouldn't expect them to go that crazy with clock speeds, and currently we don't even know if the active cooling kicks in only when docked.

On the other hand, fewer SMs clocked higher (still not overclocked) winds up being cheaper, which seems to be a big focus for Nintendo with the Switch. I don't think a downclocked TX1 in devkits would cause the noisy cooling we've heard reports of, and people who own Shield TVs indicate that even a fully clocked (1GHz) TX1 runs with silent cooling. That would indicate the devkits are overclocked, likely to simulate the Pascal variant of the SoC.

None of us likely know what the failure rate is with a Pascal GPU running at 1.5GHz or even 1GHz, so it would be hard to judge whether or not Nintendo would see any reason to downclock the final chip, though it's certainly possible. But for cost reasons I certainly wouldn't expect any more than 2 SMs.

And I'm still adamant about $199 being on the table for the Switch.
 
To be honest, if this thing could run Rocket League with decent local and online multiplayer, I wouldn't care what other titles it gets outside of the main Nintendo ones.

I am perfectly fine with the Switch being 2 to 3x the Wii U. From a power standpoint, this is a companion system, as Nintendo consoles have always been.

I firmly believe it will be in the 512-768 GFLOPS range.
 
To be honest, if this thing could run Rocket League with decent local and online multiplayer, I wouldn't care what other titles it gets outside of the main Nintendo ones.

I am perfectly fine with the Switch being 2 to 3x the Wii U. From a power standpoint, this is a companion system, as Nintendo consoles have always been.

I firmly believe it will be in the 512-768 GFLOPS range.

Rocket League with Nintendo skins would be ace
 
The X1 is 512 GFLOPS with the same 25GB/s bandwidth, so obviously Nvidia feel a Maxwell GPU with 256 cores is an OK match for that bandwidth. Plus, if they did drop processing power, they'd go for a lower clock rather than cutting cores.

On the flip side, Parker has double the bus width for a similarly sized GPU. Of course, that's not all that makes up the chip, but just saying. :)
 
I firmly believe it will be in the 512-768 GFLOPS range.

If the rumours of the noisy fan are true, maybe they did overclock it, pushing the clock speed of a Jetson TX1 to 1.5GHz. Whether that is possible, I don't know. (That is the claimed top (stock) clock for Parker, and hence a 16nm Tegra could clock that high.)

However, that would only work in docked mode, seeing as it would be drawing around 10W or so, going by the wattage of a Tegra X1 at 1GHz. (A die shrink increases performance or reduces power consumption, but can't do both.)

So, the optimistic scenario is 768 flippity flops when docked, then halved when portable to 384 flippity flops, which would be slightly more than twice as powerful as a Wii U when portable. (Not counting 16-bit floating-point precision.)

Again, this is the optimistic scenario. Who knows what performance Nintendo chose, and the rumour is that the dev-kit units have mediocre battery performance.
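Putting that optimistic scenario into back-of-envelope numbers (the ~176 GFLOPS Wii U figure is the commonly cited community estimate, not an official spec):

```python
WII_U_GFLOPS = 176  # commonly cited estimate (160 shaders x 550 MHz x 2 ops), not official

docked = 256 * 1.5 * 2   # 768 GFLOPS: 2 SMs (256 cores) at 1.5 GHz
portable = docked / 2    # 384 GFLOPS: clock halved on battery

print(portable / WII_U_GFLOPS)  # ~2.18 -> "slightly more than twice" a Wii U
```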
 
On the flip side, Parker has double the bus width for a similarly sized GPU. Of course, that's not all that makes up the chip, but just saying. :)

Yeah, similarly sized GPU, but as you say, size isn't the only thing that affects GPU performance. At max, Pascal has 50% more processing power than Maxwell, so they would need at least 50% more bandwidth to cover that. There was no way to provide just 50% more bandwidth unless they reduced the memory speed of the 128-bit RAM, which is pointless as it's rated at that speed.
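For reference, the bandwidth math behind these figures is just bus width times data rate. The 3200 MT/s value is the TX1's published LPDDR4 speed; the 128-bit case is a hypothetical Parker-style bus, so treat this as a sketch:

```python
def peak_bandwidth_gb_s(bus_width_bits, data_rate_mt_s):
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * data_rate_mt_s / 1000

print(peak_bandwidth_gb_s(64, 3200))   # 25.6 -- TX1: 64-bit LPDDR4-3200
print(peak_bandwidth_gb_s(128, 3200))  # 51.2 -- hypothetical Parker-style 128-bit bus
```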
 
Rocket League with Nintendo skins would be ace

If Nintendo isn't already on the phone with Psyonix to make a Switch version with local multiplayer and online cross-play, then someone at their HQ needs to be fired. The Switch supports Unreal 3/4, so it shouldn't be an issue to get the game over.

The possibilities for portable Rocket League are awesome... local multiplayer with friends, or teaming up to go online... custom cars and rocket trails (trails that shoot out 1-Ups and make the 1-Up noise lol)...

I can't tell you how many Rocket League players would buy a Switch if it was a viable platform.
 
If the rumours of the noisy fan are true, maybe they did overclock it, pushing the clock speed of a Jetson TX1 to 1.5GHz. Whether that is possible, I don't know. (That is the claimed top (stock) clock for Parker, and hence a 16nm Tegra could clock that high.)

However, that would only work in docked mode, seeing as it would be drawing around 10W or so, going by the wattage of a Tegra X1 at 1GHz. (A die shrink increases performance or reduces power consumption, but can't do both.)

So, the optimistic scenario is 768 flippity flops when docked, then halved when portable to 384 flippity flops, which would be slightly more than twice as powerful as a Wii U when portable. (Not counting 16-bit floating-point precision.)

Again, this is the optimistic scenario. Who knows what performance Nintendo chose, and the rumour is that the dev-kit units have mediocre battery performance.

Why would the clock speed have to be halved in portable mode? Wouldn't a Pascal GPU get 60% increased power efficiency at 1GHz, meaning it would draw 4W at 512GFlops? Maybe they do need it to be less than 4W, I guess it's possible.
 
Why would the clock speed have to be halved in portable mode? Wouldn't a Pascal GPU get 60% increased power efficiency at 1GHz, meaning it would draw 4W at 512GFlops? Maybe they do need it to be less than 4W, I guess it's possible.

You are getting the numbers mixed up. TSMC says a 16nm chip can have a 40% increase in performance or a 60% reduction in power consumption. You can't have both.

At 1.4GHz for a 16nm TX1, it would have the same Wattage as a 20nm TX1 at 1GHz.

If it was to reduce power consumption, you can get a 16nm TX1 at 1GHz with 60% less power consumption compared to a 20nm TX1 at 1GHz.

Somehow, Nvidia are claiming a 50% increase in performance for Parker, so it can do 1.5GHz with its 16nm design; it's obvious Nvidia improved the design of the chip, since it's not a mere die shrink.

Edit: As I said, the reason the clock speed would have to be halved (if it's hitting 768 GFLOPS) is that they went for an increase in performance over a decrease in power consumption.

If the max clockspeed is 1GHz, then they definitely could reduce power consumption but it doesn't make sense with the rumours about the noisy fan, the crap battery life and the better performance while docked.
 
Why would the clock speed have to be halved in portable mode? Wouldn't a Pascal GPU get 60% increased power efficiency at 1GHz, meaning it would draw 4W at 512GFlops? Maybe they do need it to be less than 4W, I guess it's possible.

It might not have to be, but if you're going for 1080p docked and 720p mobile, then you only need 45% of the pixel-processing performance in mobile mode. No point in providing more performance than necessary in a mobile device, which is always a balancing act between performance and battery drain.
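The 45% figure is just the ratio of pixel counts between the two resolutions:

```python
# 720p renders less than half the pixels of 1080p.
ratio = (1280 * 720) / (1920 * 1080)
print(round(ratio * 100, 1))  # 44.4 -> the "~45%" pixel-processing figure
```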
 
It might not have to be, but if you're going for 1080p docked and 720p mobile, then you only need 45% of the pixel-processing performance in mobile mode. No point in providing more performance than necessary in a mobile device, which is always a balancing act between performance and battery drain.
I think that's simplifying things a little bit; for example, things like polygon count don't scale down with resolution, so you can't scale purely by the number of pixels rendered.
 
The GPU isn't the only component of the system.

I'm well aware of that, but considering the size of the Switch and the reports of poor battery life I would assume the total power consumption is a good deal higher than 4W. Close to 10 maybe? I don't really know.

You are getting the numbers mixed up. TSMC says a 16nm chip can have a 40% increase in performance or a 60% reduction in power consumption. You can't have both.

At 1.4GHz for a 16nm TX1, it would have the same Wattage as a 20nm TX1 at 1GHz.

If it was to reduce power consumption, you can get a 16nm TX1 at 1GHz with 60% less power consumption compared to a 20nm TX1 at 1GHz.

Somehow, Nvidia are claiming a 50% increase in performance for Parker, so it can do 1.5GHz with its 16nm design; it's obvious Nvidia improved the design of the chip, since it's not a mere die shrink.

Edit: As I said, the reason the clock speed would have to be halved (if it's hitting 768 GFLOPS) is that they went for an increase in performance over a decrease in power consumption.

If the max clockspeed is 1GHz, then they definitely could reduce power consumption but it doesn't make sense with the rumours about the noisy fan, the crap battery life and the better performance while docked.

I'm not quite sure that's how it works... From what I understand, Nvidia is claiming that Pascal gets a 60% increase in power efficiency for the same clock rate over Maxwell, and gets a 40% increase in performance at the same power consumption. Which means, if you clock it at 1GHz you get the 60% increase in power efficiency (10W ->4W), and if you clock it at 1.5GHz you get the 40% increase in performance (10W->10W). All theoretical of course.

Please correct me if I'm wrong about that.
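That reading can be put into rough numbers. The 10W baseline for a 20nm TX1 GPU at 1GHz is the figure quoted in this thread, and the 60%/40% factors are marketing numbers, so this is very much a back-of-envelope sketch:

```python
BASELINE_W = 10.0    # 20nm TX1 GPU at 1 GHz (figure quoted in-thread, not official)
POWER_SAVING = 0.60  # claimed: 60% less power at the same clock on 16nm
SPEED_GAIN = 0.40    # claimed: 40% higher clock at the same power on 16nm

same_clock_watts = BASELINE_W * (1 - POWER_SAVING)  # spend the gain on efficiency
same_power_clock = 1.0 * (1 + SPEED_GAIN)           # or spend it on clock speed (GHz)

print(same_clock_watts)  # 4.0 -> ~4 W at 1 GHz
print(same_power_clock)  # 1.4 -> ~1.4 GHz at roughly the original 10 W
```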

It might not have to be, but if you're going for 1080p docked and 720p mobile, then you only need 45% of the pixel-processing performance in mobile mode. No point in providing more performance than necessary in a mobile device, which is always a balancing act between performance and battery drain.

I get that, but I don't think Nintendo is attempting to mandate 1080p when docked for any game, especially when many XB1 games still don't reach that. I would think the smaller the gap between docked and undocked would be, the easier it is for developers to target two power levels. Of course, this is assuming the docked/undocked power level concept is a thing here, which it is rumored to be.
 
I think that's simplifying things a little bit; for example, things like polygon count don't scale down with resolution, so you can't scale purely by the number of pixels rendered.

I didn't say they'd drop the clock to 45%, though, just that only 45% of the pixel-processing requirement is there at 720p, as an example of why they'd drop the clock rate significantly in mobile mode. Also, pixel processing will take up the large majority of GPU resources.
 
I'm not quite sure that's how it works... From what I understand, Nvidia is claiming that Pascal gets a 60% increase in power efficiency for the same clock rate over Maxwell, and gets a 40% increase in performance at the same power consumption. Which means, if you clock it at 1GHz you get the 60% increase in power efficiency (10W ->4W), and if you clock it at 1.5GHz you get the 40% increase in performance (10W->10W). All theoretical of course.

Please correct me if I'm wrong about that.

Maybe I misread it, a tech site I was looking at said something similar.

Just checked TSMC again.

TSMC's 16FF+ (FinFET Plus) technology can provide above 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology. Comparing with 20SoC technology, 16FF+ provides extra 40% higher speed and 60% power saving. By leveraging the experience of 20SoC technology, TSMC 16FF+ shares the same metal backend process in order to quickly improve yield and demonstrate process maturity for time-to-market value

http://www.tsmc.com/english/dedicatedFoundry/technology/16nm.htm

Maybe it is as you said.
 
Maybe I misread it, a tech site I was looking at said something similar.

Just checked TSMC again.



http://www.tsmc.com/english/dedicatedFoundry/technology/16nm.htm

Maybe it is as you said.

From what I understand those two things (increased performance, increased efficiency) are essentially the same thing- the smaller fab node gives you more performance per watt. You just see gains in one or the other depending on clock rate. At higher rates you gain performance, at the same/lower rate you gain power efficiency.
 
The full Netflix client runs great on Wii U, which had much less RAM than what we're talking about here for applications. It also runs on all manner of iPhones and iPads, which max out at 4GB of RAM for the highest-end iPad Pros.
1 whole GB for OS
You don't need much memory to run the apps on Wii U :-)
 
If Nintendo isn't already on the phone with Psyonix to make a Switch version with local multiplayer and online cross-play, then someone at their HQ needs to be fired. The Switch supports Unreal 3/4, so it shouldn't be an issue to get the game over.

The possibilities for portable Rocket League are awesome... local multiplayer with friends, or teaming up to go online... custom cars and rocket trails (trails that shoot out 1-Ups and make the 1-Up noise lol)...

I can't tell you how many Rocket League players would buy a Switch if it was a viable platform.

I need that now!
 
People aren't talking about how seamless and easy LAN parties will be with NS which can be a huge selling point for some. I can actually see some devs doing LAN exclusively for the NS because of its intuitiveness and convenience for the player. It's basically the same tech that allows you to play locally with the 3DS, wirelessly.
 
People aren't talking about how seamless and easy LAN parties will be with NS which can be a huge selling point for some. I can actually see some devs doing LAN exclusively for the NS because of its intuitiveness and convenience for the player. It's basically the same tech that allows you to play locally with the 3DS, wirelessly.

It's a huge selling point for me. I plan to use this feature a lot.
 
The numbers that you are quoting as reasonable include the numbers based on the most powerful possible SoC (Parker) at maximum clock speed. So you kinda make my point. I don't think many people have, for instance, critically questioned the maximum CPU clock speed. Everyone is assuming that the CPU cores will run at maximum clock speed and that this is the baseline of performance comparisons, when in reality its reasonable to assume that the baseline will be given by how fast the CPU will clock in mobile mode.

There has been barely any discussion in this thread about CPU clock speeds, so I don't know why you're saying "everyone is assuming that the CPU cores will run at maximum clock speed". Discussions on the topic in other threads have been largely reasonable and evidence based.

I read in the Nvidia blog that Xavier is targeting 20W so maybe it's using HBM2 to reduce power consumption.

Other than that, I just don't know if HBM2 would be cheaper for Nintendo than just using a 128-bit bus for LPDDR4 RAM and more cache. I can't really see HBM2 being cheaper than that, and besides, the Switch is huge, so it has space for another RAM chip.

Anyway, I don't really see Nintendo at this time choosing HBM2 when there are cheaper options, they'll likely use HBM in the future but not with the current Tegra in use.

HBM2 would almost certainly be a lot more expensive than LPDDR4 and some kind of extra L2/L3 cache, but Nintendo has taken the more expensive route with RAM on their last three home consoles, so we couldn't firmly rule it out on that basis. To be honest I think the power consumption would be the bigger issue for Nintendo, with LPDDR4's lower power draw making it more suitable for a battery-powered device.

I'd definitely say Xavier is by far the most likely culprit (possibly being used alongside a larger pool of LPDDR4 to give them a balance of bandwidth and capacity), but if there are only effectively three possibilities it's worth considering each of them.

L2 on the Maxwell portion of the TX1 is 256KB according to the TX1 whitepaper. Perhaps 8MB of shared L3 Cache between both CPU and GPU?

Edit: Here's the Whitepaper. Check Page 13. PDF Warning.

http://international.download.nvidia.com/pdf/tegra/Tegra-X1-whitepaper-v1.0.pdf

Thanks. I'm actually surprised it doesn't have a larger L2, given that Maxwell's shift to TBR would seem to be focussed mainly on bandwidth constrained scenarios like this (and this is half the L2 per ROP of the desktop Maxwell cards).

I think 8MB might be a bit much to expect, but Nintendo have dedicated 30%-40% of their last few custom dies to embedded memory, so I suppose it would actually be a bit conservative compared to their Latte or 3DS SoC designs. It's very difficult to say how well TBR bandwidth savings scale with larger cache sizes, though. For all we know there may be no benefits past 1/2MB for the kind of workloads that would be run on Switch.

OMG, now you guys are talking about Switch using Xavier? Xavier doesn't have to run on tiny batteries. It's gonna be sucking down more than 10W of juice.

Manage your expectations people! You guys do this to yourselves EVERY. DAMN. TIME.

"You guys"? One person mentioned Xavier and was quickly shot down. The only unreasonable expectations I'm seeing in this thread are from posters like yourself who are expecting us all to be going crazy thinking Switch will be some kind of powerhouse, and I'm sorry to disappoint you on that front.

No it's not, and it's kind of funny that you answer me by saying exactly what I'm warning against: you are expecting almost double the performance the specs suggest, and some people are even more confused by the "1TF half precision".

Forget about 768 GFLOPS, and don't think Switch is going to be 512 as a handheld. Reason tells us it's going to be a system with 300-350 GFLOPS working at 720p, with maybe the possibility of some extra performance to enable higher resolution while docked. Start from there and you'll save yourselves a potentially big disappointment.

When did antonz say he was expecting 500-768 Gflops in handheld mode? When people in this thread have been talking about performance in that range, to my understanding they have always been doing so assuming that that is the peak performance (i.e. while docked), and it's not an entirely unreasonable proposition. If Switch uses a 16nm Tegra with 2 SMs, then clock speeds of anywhere up to 1.5GHz would be entirely reasonable in docked mode, given the active cooling should be able to easily dissipate the 10-15W generated. In fact I honestly don't see the point of using active cooling unless they're pushing clock speeds comfortably north of the 1GHz mark.

Thraktor already explained months ago in the NX speculation threads that, because of Nvidia's tile-based rasterizer tech, if Nintendo were to use embedded RAM, they'd only need 4MB of it compared to the 32MB in the Wii U.

I speculated that they could get by with 4MB of L3 cache, but it's very much just speculation, and it's very difficult to say how far an increased cache could help with regards to limited main memory bandwidth without actually doing the kind of testing that Nintendo and Nvidia have surely done when designing the chip.
 
When did antonz say he was expecting 500-768 Gflops in handheld mode? When people in this thread have been talking about performance in that range, to my understanding they have always been doing so assuming that that is the peak performance (i.e. while docked)
I don't agree with this.

People are very commonly discussing performance in terms of which ports might and might not be possible. When doing that, you need to take minimum (that is, portable) performance as your metric, since every single game will have to run at those frequencies and the resultant performance.

Of course, this is particularly relevant on the CPU side.
 
There has been barely any discussion in this thread about CPU clock speeds, so I don't know why you're saying "everyone is assuming that the CPU cores will run at maximum clock speed". Discussions on the topic in other threads have been largely reasonable and evidence based.

In this thread most of the time clock speeds are implicitly assumed whenever people compare the maximum possible performance of the chips in question.

Most discussions arguing that the CPU is comparable to the Jaguars, or that GPU performance relative to resolution is not terribly far from the other consoles, only look at the maximum performance at maximum clock speeds, and thus disregard the fact that the target for multi-platform games will be what the CPU and GPU can deliver in mobile mode without killing the battery in one or two hours.

I don't agree with this.

People are very commonly discussing performance in terms of which ports might and might not be possible. When doing that, you need to take minimum (that is, portable) performance as your metric, since every single game will have to run at those frequencies and the resultant performance.

Of course, this is particularly relevant on the CPU side.

Exactly.
 
I also got the impression the entire 500-768 GFLOPS talk is what people expect the device to reach in general, and not just docked, and was also kinda confused. In the older tech speculation thread, people were expecting something between 300-500 GFLOPS for handheld performance, I think.
 
I also got the impression the entire 500-768 GFLOPS talk is what people expect the device to reach in general, and not just docked, and was also kinda confused. In the older tech speculation thread, people were expecting something between 300-500 GFLOPS for handheld performance, I think.

I don't think anyone has been suggesting 768GFlops in handheld mode...

I personally have been thinking 512GFlops in handheld mode on battery power is doable, and lines up decently with what we've heard about poor battery life, but I could be very wrong there.

Either way, 512 and 768 would be the max docked performance if using a TX1 and a Pascal based chip with 2SMs, respectively. A TX1 will not get 512GFlops on battery power if the GPU consumes 10W alone.


In the context of ports, obviously the minimum is important but we do have insiders claiming that ports from PS4/XB1 will not be much of a technical problem with the Switch, so I don't really view Flops as important in regards to if ports are technically possible. Rather, I'm viewing the GPU power as important in regards to whether or not third parties will feel like ports would be successful enough to warrant a port, and this idea contains two factors to me:

1) Is the port easy enough to create? Money/effort/time wise? A good GPU, docked or undocked would help minimize the efforts needed to port a game.

2) Will the port hold up against those on other platforms? I think the portability aspect of the games will help them sell on the Switch, but the other important aspect is that some people buy games expecting the best visuals. This is where the docked performance comes into play. Will the docked performance be good enough to create an experience similar enough, visually, to an XB1 or PS4 version? Will people refuse to buy these ports if they are downgraded visually too much?

So I think both undocked and docked performance (if that's even a thing, which isn't 100% confirmed still) can tell us a good deal about how third parties feel about the Switch.
 
I also got the impression the entire 500-768 GFLOPS talk is what people expect the device to reach in general, and not just docked, and was also kinda confused. In the older tech speculation thread, people were expecting something between 300-500 GFLOPS for handheld performance, I think.

I keep telling you guys it won't go over 999 GFlops in any mode.
 
The actual power of a Nintendo machine is largely irrelevant IMO. It won't be getting full AAA 3rd party support, so it doesn't need to compare to an Xbox or Playstation.

So as long as the hardware is half way respectable, that's good enough. It's a device with the Big N's full focus for 1st party software, and probably very attractive for Indie and mobile type games too.

Altogether that should add up to an attractive games machine which stands out from the rest. Something there for all types of people, and also great for core gamers as a secondary machine. That's Nintendo's market.

I think that's why I can enjoy Nintendo tech spec(ulation) threads. Unlike with PS or Xbox I don't really care if the specs aren't great because I trust Nintendo to make games that look and play great.
 
Are people really expecting it to?
 
I don't know if I'm too late (very probably), but did you guys see this new leak?

parker_specifications_two__large.jpg


Leaked by Nishikawa Zenji.
 
I don't know if I'm too late (very probably), but did you guys see this new leak?

parker_specifications_two__large.jpg


Leaked by Nishikawa Zenji.

That's just Parker. Not a leak as much as already released information from Nvidia.

I will eat some serious crow if there are Denver cores in the Switch.
 
That's just Parker. Not a leak as much as already released information from Nvidia.

I will eat some serious crow if there are Denver cores in the Switch.

I'm not thaaaaat tech guy, but putting aside the CPU, what do you think about these specs?
 
I'm not thaaaaat tech guy, but putting aside the CPU, what do you think about these specs?

The specs you listed are for a chip called "Parker", which is made by Nvidia and used in self-driving cars. It's not a leak; those specs were released by Nvidia months ago.

Regarding the Switch, it is not using "Parker" because Nvidia themselves have said it uses a custom SoC, which "Parker" is not. So I don't know why those specs were presented as a leak for the Switch specs.


Now, the Switch MIGHT wind up using a custom SoC somewhat similar to Parker, since Parker itself is fairly similar to Tegra X1, which is what is reportedly in the Switch devkits. However, certain parts of Parker, such as the Denver CPU cores, would apparently not make any sense to be used in a game console. For a car computer they do make sense.

Either way, we've landed on a fairly commonly shared consensus that the maximum docked performance (if that's even a thing) of the Switch GPU will be between 512 and 768 GFlops, which is ~3-4x more powerful than the Wii U, not accounting for architectural/toolchain differences.
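As a quick check on that "~3-4x" claim (again using the unofficial ~176 GFLOPS community estimate for the Wii U's GPU):

```python
WII_U_GFLOPS = 176  # commonly cited estimate, not an official figure

for docked in (512, 768):
    print(round(docked / WII_U_GFLOPS, 1))  # prints 2.9, then 4.4 -> roughly "3-4x"
```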
 