Nintendo Switch Dev Kit Stats Leaked? Cortex A57, 4GB RAM, 32GB Storage, Multi-Touch.

Status
Not open for further replies.
Why? It seems people always pick the negative rumours when it concerns Nintendo. Suddenly they all forget the machine is rumoured to be running Dark Souls 3.

It's unlikely that the thing will be Wii U level even in portable mode given what we've seen from similar hardware already; it'll be a noticeable increase in power undocked too. Thinking that it'll hit Wii U levels in portable mode *is* negative thinking lol

Edit:^^^ Ignore that, I got confused for a sec. But I agree with the general sentiment that most seem to actively want the machine to be bad so they can be disappointed lol


The developers didn't exactly give it a glowing review on performance. They stated it was running at "a level of performance they are happy with." If the performance was amazing they wouldn't be waiting until they see the install base to port it.

Vita vs 3DS. That is all I have to say on that point lol.
 
If your baseline expectations for the Switch are lower than the capabilities of the Shield tablet that already exists and you can go out and buy and use with your very own hands and see with your very own eyes, you really need to examine your own confirmation biases.
 
The developers didn't exactly give it a glowing review on performance. They stated it was running at "a level of performance they are happy with." If the performance was amazing they wouldn't be waiting until they see the install base to port it.

The studio has officially said nothing. The leak says someone who has spoken to them says the game is running at a performance level they are happy with. That same report says they are considering a trilogy release. There is nothing there about waiting to see an install base.
 
The studio has officially said nothing. The leak says someone who has spoken to them says the game is running at a performance level they are happy with. That same report says they are considering a trilogy release. There is nothing there about waiting to see an install base.

http://letsplayvideogames.com/2016/...-from-software-considering-trilogy-rerelease/

Development on a Switch port has been underway for several months via a small team, with From Software waiting to see initial sales data before committing to producing ports. The plan would be for a Switch rerelease of Dark Souls 3, if greenlit, to release the same day as PS4, Xbox One and PC receive versions with all DLC included.
 
500 GFLOPS portable and 1 TFLOP docked is the absolute maximum I'm expecting.

I'm going for 352 GFLOPS portable and 704 GFLOPS docked as my prediction.

I don't think the difference will be that high between portable and console mode. As far as we know, only the resolution changes between these two.

I would expect something more in the line of 512 GFLOPS portable and, as you said, around 700-750 GFLOPS docked.
 
I don't think the difference will be that high between portable and console mode. As far as we know, only the resolution changes between these two.

I would expect something more in the line of 512 GFLOPS portable and, as you said, around 700-750 GFLOPS docked.

If the resolution change is the only driver here (it likely isn't), then you'd need a clock change of ~2.25x, right? Which would be about 300 GFLOPS to 700 GFLOPS.
 
I don't think the difference will be that high between portable and console mode. As far as we know, only the resolution changes between these two.

I would expect something more in the line of 512 GFLOPS portable and, as you said, around 700-750 GFLOPS docked.

720p > 1080p is more than twice as many pixels; you need pretty much twice as much power to increase the resolution to 1080p, assuming of course you want native 1080p for docked.
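The pixel math behind that ~2.25x figure is straightforward; a quick sanity check in Python:

```python
# Pixel counts for the two rumored display targets.
portable_pixels = 1280 * 720   # 720p handheld screen: 921,600 pixels
docked_pixels = 1920 * 1080    # 1080p TV output: 2,073,600 pixels

ratio = docked_pixels / portable_pixels
print(ratio)  # 2.25
```

So native 1080p docked has to shade 2.25x the pixels of 720p portable, which is where the ~2.25x clock estimate comes from, assuming pixel cost scales roughly linearly and nothing else changes.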
 
Can somebody explain to me how it is that multiple sources have confirmed the dev kit specs in the OP to be correct, and yet people are still expecting the final unit to be 1.5-2x as powerful? Moreover, expecting higher performance than the watercooled Tegra Parker chip? Stop clinging on to that 1 TFLOPS rumor; it's obviously referring to FP16, and the only counterarguments I've seen require that the writer of that article has actual technical knowledge. From the article, that's clearly not the case. We were finally getting realistic, then this poorly written article comes along and both sides turn into something ridiculous.

Based on the rumors, we are almost certainly getting 512 GFLOPS docked. Expecting more is setting yourself up for disappointment. Nintendo likely chose this because there's already a chip available that matches it to use for development, while going for more would have made it hard to hit an early 2017 release (or, even more so, the likely original late 2016 target, which is probably the reason they couldn't go with Pascal in the first place). It's possible that it's a bit faster than that, but not 50% faster and certainly not 100%. Set your expectations for 256-384 GFLOPS on the go and 512 docked. Anything more is gravy.

To ease some of the disappointment: between the use of FP16 to give a boost in efficiency, the fact that Nintendo's customization is likely centered around increasing memory bandwidth, and just the typical advantages of console optimization, it's likely that what we end up with will be able to match or exceed this in real-world use: https://www.youtube.com/watch?v=aEHhOmlyhJQ&t=395s

Is that not good enough for what it is? Some games would run at sub-HD but it's really not terrible. Who knows, maybe they will even be able to add 100-200MHz to it and squeeze out something even better.

Edit: More examples: https://www.youtube.com/watch?v=jxAeIl-JX48 Note that most of these are running at medium settings, and that the 940M has only 14.4 GB/s of available memory bandwidth.
 
Based on the rumors, we are almost certainly getting 512 GFLOPS docked.

Because, unlike you, we don't declare what rumors we like to be correct and what rumors we don't like to be bunk, and then like to discuss what makes sense and what doesn't.

Such as an actively cooled handheld mode in lieu of what the Pixel C can accomplish passively with a stock X1.
 
...

To ease some of the disappointment: between the use of FP16 to give a boost in efficiency, the fact that Nintendo's customization is likely centered around increasing memory bandwidth, and just the typical advantages of console optimization, it's likely that what we end up with will be able to match or exceed this in real-world use: https://www.youtube.com/watch?v=aEHhOmlyhJQ&t=395s

Is that not good enough for what it is? Some games would run at sub-HD but it's really not terrible. Who knows, maybe they will even be able to add 100-200MHz to it and squeeze out something even better.

Edit: More examples: https://www.youtube.com/watch?v=jxAeIl-JX48 Note that most of these are running at medium settings, and that the 940M has only 14.4 GB/s of available memory bandwidth.
The level of performance between a 940M and 950M is what I think the device in "console mode" might reach. The 940M memory bus is only 64 bits wide (single-channel DDR3). The 950M is 128 bits wide, but with 6 SM cores, which I don't see happening. The core configuration should also be limited on the Switch, but ultimately we just don't know what Nintendo and Nvidia have been gunning for.

https://en.wikipedia.org/wiki/List_...ocessing_units#GeForce_900M_.289xxM.29_series
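As a sanity check on that 14.4 GB/s figure mentioned earlier: peak bandwidth is just bus width times effective transfer rate. The transfer rate below is an assumption (DDR3-1800) chosen to match the quoted number:

```python
# Peak memory bandwidth = bus width (in bytes) x effective transfer rate.
bus_bytes = 64 // 8            # 940M: single-channel 64-bit DDR3 bus
transfers_per_sec = 1.8e9      # assumed DDR3-1800 effective rate (MT/s)

bandwidth_gb_s = bus_bytes * transfers_per_sec / 1e9
print(bandwidth_gb_s)  # 14.4
```

Doubling the bus to 128 bits (the 950M configuration) would double this, which is why bus width matters as much as core count here.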
 
Can somebody explain to me how it is that multiple sources have confirmed the dev kit specs in the OP to be correct, and yet people are still expecting the final unit to be 1.5-2x as powerful? Moreover, expecting higher performance than the watercooled Tegra Parker chip? Stop clinging on to that 1 TFLOPS rumor; it's obviously referring to FP16, and the only counterarguments I've seen require that the writer of that article has actual technical knowledge. From the article, that's clearly not the case. We were finally getting realistic, then this poorly written article comes along and both sides turn into something ridiculous.

Based on the rumors, we are almost certainly getting 512 GFLOPS docked. Expecting more is setting yourself up for disappointment. Nintendo likely chose this because there's already a chip available that matches it to use for development, while going for more would have made it hard to hit an early 2017 release (or, even more so, the likely original late 2016 target, which is probably the reason they couldn't go with Pascal in the first place). It's possible that it's a bit faster than that, but not 50% faster and certainly not 100%. Set your expectations for 256-384 GFLOPS on the go and 512 docked. Anything more is gravy.

To ease some of the disappointment: between the use of FP16 to give a boost in efficiency, the fact that Nintendo's customization is likely centered around increasing memory bandwidth, and just the typical advantages of console optimization, it's likely that what we end up with will be able to match or exceed this in real-world use: https://www.youtube.com/watch?v=aEHhOmlyhJQ&t=395s

Is that not good enough for what it is? Some games would run at sub-HD but it's really not terrible. Who knows, maybe they will even be able to add 100-200MHz to it and squeeze out something even better.

Edit: More examples: https://www.youtube.com/watch?v=jxAeIl-JX48 Note that most of these are running at medium settings, and that the 940M has only 14.4 GB/s of available memory bandwidth.


I don't think you realize how similar Pascal is to Maxwell. It is entirely possible Nintendo started working on this project at the inception or release of the Maxwell-based Shield products. When they made an agreement with Nvidia they were informed about Pascal and that it could be ready near the release of the Switch. Pascal is essentially die-shrunk Maxwell, so you aren't reinventing the wheel.

Most dev kits in the past (correct me if I'm wrong) don't represent the final SoC or hardware used in the retail product. They are there to emulate the performance and specifications of the final customized product as best as possible. The PS4 and Xbox One were emulated with PCs with roughly the same specs as the final machines to give developers enough time to adjust to the type of performance and hardware to expect.

So it makes complete sense that Maxwell is used as the framework for a final retail product that could be pivoting towards Pascal, since the architecture is roughly the same. It is entirely possible that everything around the system, like the RAM and the screen resolution of the tablet, is locked in, and all that is changing is the performance of the hardware. This would explain why the alleged battery life has been in flux or a moving target. Pascal has efficiency implications that would affect battery life.
 
Because, unlike you, we don't declare what rumors we like to be correct and what rumors we don't like to be bunk, and then like to discuss what makes sense and what doesn't.

Such as an actively cooled handheld mode in lieu of what the Pixel C can accomplish passively with a stock X1.

The Pixel C has more surface area and throttles, with its max speed being 850MHz (435.2 GFLOPS). Even matching that consistently would require a fan if it's on a 20nm process. To be fair, I did remember it incorrectly, so I'll say that on the optimistic side we might see something like 435 GFLOPS on the go and 614 docked, but it's really not realistic or supported by any rumors to expect more. Speaking of which, what rumors am I declaring bunk, exactly? And what makes you think that I want this to be weak when I've said multiple times that I'm buying it day one? I'm trying to be realistic here. You people are the ones cutting out rumors you don't like and reading things the way you want to by believing that it must be 1 TFLOPS in single precision because it's what makes you happy.
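All of these GFLOPS figures come from the same simple formula, assuming the rumored 256-core Maxwell (Tegra X1-class) GPU where each CUDA core does one fused multiply-add (2 FP32 ops) per cycle:

```python
CORES = 256  # rumored Tegra X1-class Maxwell GPU

def fp32_gflops(clock_mhz):
    # cores x 2 ops/cycle (one FMA) x clock in GHz
    return CORES * 2 * clock_mhz / 1000.0

print(fp32_gflops(850))   # 435.2 (Pixel C's max GPU clock)
print(fp32_gflops(1000))  # 512.0
print(fp32_gflops(1200))  # 614.4
```

That's why the numbers thrown around in this thread cluster where they do: each clock-speed guess maps directly onto one of these figures.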

I knew that anything I said would be a waste of time, so I'll just wait and say "I told you so" when the time comes.

I dont think you realize how similar Pascal is to Maxwell. It is entirely possible Nintendo started working on this project at the inception or release of Maxwell based shield products. When they made an agreement with Nvidia they were informed about Pascal and that it could be ready near the release of the Switch. Pascal is essentially die shrunk Maxwell so you arent recreating the wheel.

Most dev kits in the past correct me if im wrong dont represent the final SOC or hardware used in the retail product. It is there to emulate the performance and specifications of the final customized product as best as possible. The ps4 and x1 were emulated with PCs with roughly the same specs as the final machines to give developers enough time to adjust to the type of performance and hardware to expect.

So it makes complete sense that Maxwell is used as framework for the final retail product that could be pivoting towards Pascal since the architecture is roughly the same. It is entirely possible that everything around the system like the RAM and Screen resolution of the tablet is locked in and all that is changing is the performance of the hardware. This would explain why the alleged battery life has been in flux or a moving target. Pascal has efficiency implications that would affect battery life.

I probably realize it more than you do. It's a question of whether they would have been able to have a chip ready for a November 2016 launch (the likely original target). None of us can answer that definitively.
 
The Pixel C has more surface area and throttles, with its max speed being 850MHz (435.2 GFLOPS). Even matching that consistently would require a fan if it's on a 20nm process. To be fair, I did remember it incorrectly, so I'll say that on the optimistic side we might see something like 435 GFLOPS on the go and 614 docked, but it's really not realistic or supported by any rumors to expect more. Speaking of which, what rumors am I declaring bunk, exactly? And what makes you think that I want this to be weak when I've said multiple times that I'm buying it day one? I'm trying to be realistic here. You people are the ones cutting out rumors you don't like and reading things the way you want to by believing that it must be 1 TFLOPS in single precision because it's what makes you happy.

I knew that anything I said would be a waste of time, so I'll just wait and say "I told you so" when the time comes.



I probably realize it more than you do. It's a question of whether they would have been able to have a chip ready for a November 2016 launch (the likely original target). None of us can answer that definitively.

Quoting you so we can all see how it blows up in your face if it turns out better than your "expectations".
 
I probably realize it more than you do. It's a question of whether they would have been able to have a chip ready for a November 2016 launch (the likely original target). None of us can answer that definitively.

I'm curious why you believe the manufacturing cut-off was November 2016, when it doesn't seem like they're ramping up for a huge launch, with a publicly stated shipping target of 2 million?

If they can't manufacture more than 100k units a week, that's a bigger problem than anything architecture-related.
 
The Pixel C has more surface area and throttles, with its max speed being 850MHz (435.2 GFLOPS). Even matching that consistently would require a fan if it's on a 20nm process. To be fair, I did remember it incorrectly, so I'll say that on the optimistic side we might see something like 435 GFLOPS on the go and 614 docked, but it's really not realistic or supported by any rumors to expect more. Speaking of which, what rumors am I declaring bunk, exactly? And what makes you think that I want this to be weak when I've said multiple times that I'm buying it day one? I'm trying to be realistic here. You people are the ones cutting out rumors you don't like and reading things the way you want to by believing that it must be 1 TFLOPS in single precision because it's what makes you happy.

I knew that anything I said would be a waste of time, so I'll just wait and say "I told you so" when the time comes.

I probably realize it more than you do. It's a question of whether they would have been able to have a chip ready for a November 2016 launch (the likely original target). None of us can answer that definitively.

What you're saying here (~400 max portable, ~600 max docked) is incredibly reasonable. I'm not sure why people are disagreeing so much. Our speculation for the past several months has landed at 500-700Gflops max when docked, so this is essentially within that range.

Are people actually taking that 1TFlop claim from the Venture Beat article as a serious goal for FP32? I thought we were just pointing to that claim as a reason to ignore the article in its entirety, and to essentially show how stupid some of the posts in that thread were ("LOL Nintendo, Wii U 2.0" ...when the article says it's more powerful than any of us are expecting).

I don't think anyone is expecting 1TFlop of FP32, though 500-700GFlops on an Nvidia chip could be fairly close to XB1 performance if developers take proper advantage of it (FP16 code, for instance).

EDIT: That said I'm certainly hoping for more, maybe more SMs or a higher clock speed due to 16nm process or something, but I think this is a perfectly fine baseline for realistic expectation.
 
I probably realize it more than you do. It's a question of whether they would have been able to have a chip ready for a November 2016 launch (the likely original target). None of us can answer that definitively.


Maybe this is the reason it took so long to announce, and why they pushed it to exactly the six-month mark before release? If November was the target and it is now March, what do you attribute that to?
 
Speculation: I'm starting to wonder if the GPU is indeed configured just like a Tegra, or if it has some properties that are more akin to GeForce chips.
 
Maybe this is the reason it took so long to announce, and why they pushed it to exactly the six-month mark before release? If November was the target and it is now March, what do you attribute that to?

More time for games, and 3rd party games in particular; they've even talked about this.
 
What you're saying here (~400 max portable, ~600 max docked) is incredibly reasonable. I'm not sure why people are disagreeing so much. Our speculation for the past several months has landed at 500-700Gflops max when docked, so this is essentially within that range.

I am agreeing with that range of 500 to 700... I just believe it is entirely possible Pascal is how they are going to achieve it, given the battery life targets we have been hearing as of late. The whole concept relies on the battery life being good.
 
I'm not the most technically minded guy out there; assuming those OP numbers are 100% accurate, how does that compare to the Wii U and XBO/PS4?
 
400+ GFLOPS portable and 600 GFLOPS docked is more than I expected. I'd be happy with that.

This thing would be stronger than the Wii U even in portable mode. Considering what they did with the Wii U and 3DS... 😮
Please make enough of them, Ninty 🙏
 
I am agreeing with that range of 500 to 700... I just believe it is entirely possible Pascal is how they are going to achieve it, given the battery life targets we have been hearing as of late. The whole concept relies on the battery life being good.

If by Pascal you mean 16nm then yeah, I agree that it makes a lot more sense for them to go with a 16nm process, whether Maxwell or Pascal. We just don't know at this point, and that Venture Beat article ruffled a lot of feathers here apparently without saying anything at all about the process node.

I'm not the most technically minded guy out there; assuming those OP numbers are 100% accurate, how does that compare to the Wii U and XBO/PS4?

The numbers in the OP are wrong and some are a bit nonsensical (1024 FLOPs/cycle?), but the reasonable expectations based on those numbers bring us to (in real-world performance of CPU, GPU and RAM, not raw power) something like 5-6x Wii U, maybe 75-90% of XB1.
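For context on that comparison, here are the raw FP32 throughput ratios using commonly cited figures; note the 5-6x and 75-90% estimates above are about real-world results, which factor in architecture, API overhead, and FP16 use, so they diverge from the raw numbers:

```python
# Commonly cited raw FP32 figures, in GFLOPS.
wii_u = 176.0          # Wii U "Latte" GPU (widely cited estimate)
xbox_one = 1310.0      # Xbox One GPU
switch_docked = 512.0  # rumored docked figure discussed in-thread

print(switch_docked / wii_u)     # ~2.9x the Wii U in raw FLOPS
print(switch_docked / xbox_one)  # ~0.39x the Xbox One in raw FLOPS
```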
 
I'm curious why you believe the manufacturing cut-off was November 2016, when it doesn't seem like they're ramping up for a huge launch, with a publicly stated shipping target of 2 million?

If they can't manufacture more than 100k units a week, that's a bigger problem than anything architecture-related.

Maybe this is the reason it took so long to announce, and why they pushed it to exactly the six-month mark before release? If November was the target and it is now March, what do you attribute that to?

The delay was likely a software issue. Changing the architecture late-ish in the game is certainly possible, but unlikely.

What you're saying here (~400 max portable, ~600 max docked) is incredibly reasonable. I'm not sure why people are disagreeing so much. Our speculation for the past several months has landed at 500-700Gflops max when docked, so this is essentially within that range.

Are people actually taking that 1TFlop claim from the Venture Beat article as a serious goal for FP32? I thought we were just pointing to that claim as a reason to ignore the article in its entirety, and to essentially show how stupid some of the posts in that thread were ("LOL Nintendo, Wii U 2.0" ...when the article says it's more powerful than any of us are expecting).

I don't think anyone is expecting 1TFlop of FP32, though 500-700GFlops on an Nvidia chip could be fairly close to XB1 performance if developers take proper advantage of it (FP16 code, for instance).

EDIT: That said I'm certainly hoping for more, maybe more SMs or a higher clock speed due to 16nm process or something, but I think this is a perfectly fine baseline for realistic expectation.

It's because I've made a lot of enemies. I expect certain people to hate on anything I say no matter what. It doesn't bother me, I know it's my fault.
 
The delay was likely a software issue. Changing the architecture late-ish in the game is certainly possible, but unlikely.

Final devkits going out last month would suggest it's not only a software issue. Otherwise, they'd have gone out much sooner so that more software could be finalized sooner rather than later.

That's just 256 CUDA cores × 2 FP32 floating-point ops per cycle per CUDA core × double-rate FP16.

FLOPS/cycle, the unit representation itself, is weird. It'd just be FLOPS.
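For what it's worth, the OP's 1024 figure does reconstruct cleanly as operations per cycle:

```python
cuda_cores = 256
fp32_ops_per_cycle = cuda_cores * 2          # one FMA = 2 FP32 ops per core
fp16_ops_per_cycle = fp32_ops_per_cycle * 2  # Maxwell/X1 runs FP16 at double rate
print(fp32_ops_per_cycle, fp16_ops_per_cycle)  # 512 1024
```

Multiply by the clock to get FLOPS proper, which is presumably what the spec sheet's author meant.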
 
The delay was likely a software issue. Changing the architecture late-ish in the game is certainly possible, but unlikely.

I feel like a broken record here, but they certainly didn't have to change the architecture to take advantage of the increased efficiency of a 16nm process. The availability of the 16nm process could be part of the reason for the delay, there is really no way to know at this point.

That's just 256 CUDA cores × 2 FP32 floating-point ops per cycle per CUDA core × double-rate FP16.

What I mean is "1024 FLOPS/cycle" isn't an appropriate unit here. Floating-point operations per second per cycle doesn't make sense... unless I'm mistaken?
 
The problem with the idea of 16nm is that the rumor that had been going around the past year is that Nvidia had too much 20nm capacity, and were doing a fire-sale of sorts for it.

While it's entirely possible that rumor is/was wrong/incorrect/FUD, it would have to be totally wrong for 16nm to be in the Switch.
 
The problem with the idea of 16nm is that the rumor that had been going around the past year is that Nvidia had too much 20nm capacity, and were doing a fire-sale of sorts for it.

While it's entirely possible that rumor is/was wrong/incorrect/FUD, it would have to be totally wrong for 16nm to be in the Switch.

Can you give a source for that rumor, because from what I remember it was 100% a theory presented here by Thraktor, rather than a rumor based on any sort of inside information.

Splitting hairs ;) I think it's just understood, and really pointless to complain about (of all things).

My point in bringing it up is that it's a good reason to throw out that list in the OP. When you have nonsense units like that, it's hard to trust that the person who came up with the list has much technical knowledge.

(Again, unless this is an actual way to represent that, although it sure doesn't seem to be)
 
The delay was likely a software issue. Changing the architecture late-ish in the game is certainly possible, but unlikely.

Given rumours are still discussing battery life targets, not final prototype battery life, I'm not sure it's a given that final production hardware has been locked down for full manufacturing yet.
e: I mean, if nothing else, actual battery size/capacity versus cost could still be being weighed; the internal battery storage space in the Wii U GamePad being vastly larger than the batteries it ended up shipping with suggests that was a late change to the manufacturing design.
 
(Again, unless this is an actual way to represent that, although it sure doesn't seem to be)

It's not, but we don't know the context of who wrote it and for whom (if they really needed the clarification about per clk, per sec, etc.).

What's curious is how they arrive at a 14.4 GP/clk fillrate with a tex fillrate of 16 Gtex/clk.
 
Because, unlike you, we don't declare what rumors we like to be correct and what rumors we don't like to be bunk, and then like to discuss what makes sense and what doesn't.

Such as an actively cooled handheld mode in lieu of what the Pixel C can accomplish passively with a stock X1.

Based on the rumors, we are almost certainly getting 512 GFLOPS docked, and based on the rumors we are almost certainly not getting 1 TFLOP!!!!! lol.

If people don't want to read or discuss rumors then they should ignore the thread.
 
The delay was likely a software issue. Changing the architecture late-ish in the game is certainly possible, but unlikely.



It's because I've made a lot of enemies. I expect certain people to hate on anything I say no matter what. It doesn't bother me, I know it's my fault.
Enemies is a strong word. We are just arguing about video games; nobody hates you haha.
 
Can you give a source for that rumor, because from what I remember it was 100% a theory presented here by Thraktor, rather than a rumor based on any sort of inside information.

You know, I swear I read it on SemiAccurate, and this is now the second time I've failed to find the source in two days, so I'll just stop mentioning it. Maybe it wasn't a real article. I swore it was.

In that case, ignore what I said. I had no intention of spreading misinformation myself. I thought the 20nm thing was well documented, but I guess not :(
 
Final devkits going out last month would suggest it's not only a software issue. Otherwise, they'd have gone out much sooner so that more software could be finalized sooner rather than later.



FLOPS/cycle, the unit representation itself, is weird. It'd just be FLOPS.

Probably meant FLOPs/cycle
 
If the resolution change is the only driver here (it likely isn't), then you'd need a clock change of ~2.25x, right? Which would be about 300 GFLOPS to 700 GFLOPS.

720p > 1080p is more than twice as many pixels; you need pretty much twice as much power to increase the resolution to 1080p, assuming of course you want native 1080p for docked.

Mh, didn't think it would be that much. You learn something new every day.
300 portable to 700-750 docked it is.
 
I feel like a broken record here, but they certainly didn't have to change the architecture to take advantage of the increased efficiency of a 16nm process. The availability of the 16nm process could be part of the reason for the delay, there is really no way to know at this point.

Perhaps, but it's possible that they were locked-in either way. We don't know how these contracts work, after all.

Another thing: we don't know if Pascal's architectural tweaks are the reason it clocks so high.
 
Perhaps, but it's possible that they were locked-in either way. We don't know how these contracts work, after all.

Another thing: we don't know if Pascal's architectural tweaks are the reason it clocks so high.

Isn't it typically the die shrink that gives the chip its increased power efficiency? I suppose you're right that we don't know about the max clock rate of a 16nm Maxwell chip, but I would assume the reason it can clock higher with Pascal is the increased power efficiency. That is just an assumption though; you are correct.

But at this point there's really no way for us to know if it's on 20nm or 16nm- we just know that 16nm would make more sense due to power/battery concerns and likely price too, but we don't know any of the conditions regarding the contract or the production timeline.

they are talking about FP16

The article says nothing about FP16 or FP32, and they compare that 1 TF number to the 6 TF of Scorpio. The point is, the author of that article (assuming he saw the FP16 number) clearly doesn't have enough technical knowledge to draw conclusions based on microarchitecture.
 
Final devkits going out last month would suggest it's not only a software issue. Otherwise, they'd have gone out much sooner so that more software could be finalized sooner rather than later.



FLOPS/cycle, the unit representation itself, is weird. It'd just be FLOPS.

Not necessarily. Having final kits out isn't really going to be important for finalizing software unless the final kit is significantly different. Otherwise, the last few months will be largely spent on optimization and such. If Nintendo knew long in advance that they wouldn't meet the release target, it makes sense that they'd spend a little more time tweaking the hardware. You can argue that it would give time to switch to Pascal, but we don't know how the contracts work, what they're locked into, or whether going that route would have delayed software.

Isn't it typically the die shrink that gives the chip its increased power efficiency? I suppose you're right that we don't know about the max clock rate of a 16nm Maxwell chip, but I would assume the reason it can clock higher with Pascal is the increased power efficiency. That is just an assumption though; you are correct.

But at this point there's really no way for us to know if it's on 20nm or 16nm- we just know that 16nm would make more sense due to power/battery concerns and likely price too, but we don't know any of the conditions regarding the contract or the production timeline.

Power efficiency and how high the clocks an architecture can reach are two very different things. Otherwise, we would have seen much higher clocks out of AMD's Polaris cards than we actually did.
 
I would be disappointed if Nintendo went with a 20nm process, not so much for the horsepower implications, but because it would be stuck with an old process that would likely increase the production cost, just like the Wii U.
 