Nintendo Switch Dev Kit Stats Leaked? Cortex A57, 4GB RAM, 32GB Storage, Multi-Touch.

Keeping the Wii U version would make no sense financially and I would even call it an insane move.

1. By the time Bloodstained comes out, the Switch would be out for a year and the Wii U dead for 2.
2. The Switch supports the UE4 out of the box. The Wii U does not.
3. Armature would be able to focus on the Vita version.

But the Vita has been dead for 2 years already; that's clearly not something they consider when porting the game.

lol, couldn't resist
 
If the Switch actually ends up using the Xavier SoC, will it actually be as powerful as the XB1 in terms of teraflops and general computation performance? I'd imagine factors such as memory bandwidth would play into its overall performance too, as well as power consumption (although if the dock mode gives it that extra boost, maybe that won't be an issue)
 
Yes, Xavier would be the most likely suspect. It is worth noting that the article claimed the chip was "in production", and Xavier isn't, although it is possible this was mis-interpreted or mis-transcribed by the author. It would be an unusual option for Xavier, though, in that GDDR5(X) could give them the same bandwidth and substantially more capacity for a lower cost, and I wouldn't expect them to be so heavily power-constrained to choose HBM2 on the basis of power savings. The target render Nvidia showed of the Xavier board also showed what clearly looked like RAM modules sitting next to the SoC (although of course one can read too much into early target renders like this).

Alternatively it is technically possible that it's for a GPU, but given the capacity and bandwidth it would only be suitable for an entry-level card, and Nvidia would have to tape out a completely different die (probably something equivalent to a GP107) with a HBM interface just to be able to use it for a single slightly more power efficient laptop GPU. This would be an extremely unusual move for Nvidia, as they typically have a variety of SKUs per die across both laptop and desktop in order to take advantage of binning (the most power efficient dies going to laptops) and to reduce design costs and inventory risk. They wouldn't be able to do this here (no point selling it for desktops if it's just a more expensive version of the 1050Ti), and in any case it's unlikely that they'd be able to warrant the increased cost of a HBM-powered card versus its GDDR5 equivalent, as the power savings would be relatively small (especially as you're comparing to binned GP107's in laptops).

The last option (unless there's any other super-secret SoC due soon) is the Nintendo Switch. We know it's in production, we know Nintendo likes esoteric memory, and we have it on good authority that Switch will have 4GB of RAM. As you say, though, it would be overkill for a device of that performance level (you'd potentially be looking at equivalent bandwidth to PS4 Pro, despite being at best 1/5th as powerful and using a considerably more bandwidth efficient architecture), and it would almost certainly be very expensive for a device which we've been told is targeting a "surprisingly low" price point.

Effectively it has to be one of the above three. Xavier seems the least unlikely (although on the surface it wouldn't have been something I would have predicted), and I'd actually argue that a GPU with 4GB of HBM2 is more unlikely than Switch using it. For Switch, at least, it would appear to be the only feasible way of hitting 200GB/s+ of bandwidth in a portable form factor, if they were for some reason to decide that's what they want.
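
To put rough numbers on the "overkill" point, here's a quick back-of-envelope comparison in Python (every figure below is an assumption pulled from this discussion, not a confirmed spec):

# Bytes of bandwidth per FLOP, a crude balance metric (illustrative figures only)
hbm2_stack_bw = 256.0      # GB/s, one 4GB HBM2 stack (assumed)
ps4_pro_bw = 218.0         # GB/s, PS4 Pro's GDDR5
ps4_pro_flops = 4200.0     # GFLOPS FP32, ~4.2 TF
switch_flops = 750.0       # GFLOPS, an optimistic docked estimate from this thread

print(ps4_pro_bw / ps4_pro_flops)    # ~0.05 bytes per FLOP on PS4 Pro
print(hbm2_stack_bw / switch_flops)  # ~0.34 bytes per FLOP for a Switch with HBM2

Even against an optimistic performance estimate, an HBM2 stack would leave the Switch with several times more bandwidth per FLOP than PS4 Pro, which is the sense in which it would be overkill.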

I read in the Nvidia blog that Xavier is targeting 20W so maybe it's using HBM2 to reduce power consumption.

Other than that, I just don't know if HBM2 would be cheaper for Nintendo than to just use a 128-bit bus for LPDDR4 RAM and more Cache. I can't really see HBM2 being cheaper than that and also the Switch is huge so it has space for another RAM chip.

Anyway, I don't really see Nintendo at this time choosing HBM2 when there are cheaper options, they'll likely use HBM in the future but not with the current Tegra in use.

If the Switch actually ends up using the Xavier SoC, will it actually be as powerful as the XB1 in terms of teraflops and general computation performance? I'd imagine factors such as memory bandwidth would play into its overall performance too, as well as power consumption (although if the dock mode gives it that extra boost, maybe that won't be an issue)

It won't be using Xavier. Nvidia won't have any samples to show to clients related to automotive until first quarter of 2017.

https://blogs.nvidia.com/blog/2016/09/28/xavier/
 
If the Switch actually ends up using the Xavier SoC, will it actually be as powerful as the XB1 in terms of teraflops and general computation performance? I'd imagine factors such as memory bandwidth would play into its overall performance too, as well as power consumption (although if the dock mode gives it that extra boost, maybe that won't be an issue)

SoC? System of Computation?
 
I read in the Nvidia blog that Xavier is targeting 20W so maybe it's using HBM2 to reduce power consumption.

Other than that, I just don't know if HBM2 would be cheaper for Nintendo than to just use a 128-bit bus for LPDDR4 RAM and more Cache. I can't really see HBM2 being cheaper than that and also the Switch is huge so it has space for another RAM chip.

Anyway, I don't really see Nintendo at this time choosing HBM2 when there are cheaper options, they'll likely use HBM in the future but not with the current Tegra in use.



It won't be using Xavier. Nvidia won't have any samples to show to clients related to automotive until first quarter of 2017.

https://blogs.nvidia.com/blog/2016/09/28/xavier/

Power consumption is also higher with HBM vs lpddr4. But the performance gains would be worth it.
 
I honestly don't know. Part of the issue is that I can't actually find official confirmation of the GPU L2 cache size (which is the important one in this case) for TX1 or Parker. I seem to recall that TX1 had a 512KB GPU L2 cache, but that isn't something I can find any hard data on. My gut says that an L3 victim cache of somewhere between 2MB - 4MB would probably do the job (Apple uses a 4MB L3 victim cache for the same purpose in its SoCs), but it's not something you can really tell without testing, and without a TX1 or Parker customised with a large L3 victim cache that can be fractionally disabled it's not something we can test.

(For what it's worth, the CPU L1 and L2 caches on Parker aren't both 2MB. The L1 depends on the cores, with the Denver cores each getting 128K I-cache and 64K D-cache, and the A57s getting 48K I/32K D each. There's then a 2MB L2 shared between the Denvers and a separate 2MB L2 shared among the A57s.)

L2 on the Maxwell portion of the TX1 is 256KB according to the TX1 whitepaper. Perhaps 8MB of shared L3 Cache between both CPU and GPU?

Edit: Here's the Whitepaper. Check Page 13. PDF Warning.

http://international.download.nvidia.com/pdf/tegra/Tegra-X1-whitepaper-v1.0.pdf
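
For a sense of scale, some rough arithmetic on how render-target sizes compare with the cache sizes being discussed (the formats and the tile size below are assumptions for illustration, not anything Nvidia has published):

# How big are the buffers being juggled vs. the caches under discussion? (illustrative)
w, h = 1280, 720
color = w * h * 4                    # 32-bit colour target: ~3.5 MiB
depth = w * h * 4                    # 32-bit depth/stencil: ~3.5 MiB
print(color / 2**20, depth / 2**20)  # each is far larger than a 256KB L2

# A tile-based rasterizer only needs to keep a small tile on-chip at a time
tile = 64 * 64 * (4 + 4)             # hypothetical 64x64 tile, colour + depth
print(tile / 2**10)                  # 32 KB

So a full 720p target doesn't come close to fitting in a 256KB L2, while a few MB of L3 victim cache would cover a meaningful slice of the working set, which is roughly the trade-off being weighed here.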
 
I do hope the "surprisingly low" or "cheaper than expected" rumor pans out. I remember people predicting $300+ for the Vita due to its modern (at the time) hardware. Most seem to be expecting $299, so maybe $250.
Keeping the Wii U version would make no sense financially and I would even call it an insane move.

1. By the time Bloodstained comes out, the Switch would be out for a year and the Wii U dead for 2.
2. The Switch supports the UE4 out of the box. The Wii U does not.
3. Armature would be able to focus on the Vita version.
Yeah, wouldn't make sense and I imagine Wii U backers wouldn't mind Switching to another platform.
Yooka Laylee makes a bit more sense since it should be out before Switch and the engine is fully supported.
I wonder if a Vita port would still make sense. Not sure how it's doing in Japan, but sales have been dead everywhere else for a while and I don't see many caring 2 years from now.
The Switch could serve as the portable version if fans wanted one or PS4 if they wanted on PlayStation
 
Power consumption is also higher with HBM vs lpddr4. But the performance gains would be worth it.

And yet I feel it doesn't work with the current rumoured design. Even if it had A72 for the CPU and all the Nvidia tech to work around memory bandwidth issues then, it feels like an HBM2 stack would be overcompensating.
 
And yet I feel it doesn't work with the current rumoured design. Even if it had A72 for the CPU and all the Nvidia tech to work around memory bandwidth issues then, it feels like an HBM2 stack would be overcompensating.
IF they went with HBM, it'd probably be HBM1 considering the 4GB rumor.

But yeah, if HBM uses more power, it's unlikely.
 
So Xavier is only being used for Autonomous applications in cars? Is it a straight up successor to Parker and more powerful etc, or something of a side step?

From the blog.

Packed with 7 billion transistors, and manufactured using cutting-edge 16nm FinFET process technology, a single Xavier AI processor will be able to replace today’s DRIVE PX 2 configured with dual mobile SoCs and dual discrete GPUs — at a fraction of the power consumption.

It's a successor to Drive PX2; Parker is what's used inside the Drive PX2.

Edit: https://www.computerbase.de/2016-08/nvidia-tegra-parker-denver-2-arm-pascal-16-nm/

There are slides in that link showing how Parker is used in Drive PX2.
 
I honestly don't know. Part of the issue is that I can't actually find official confirmation of the GPU L2 cache size (which is the important one in this case) for TX1 or Parker. I seem to recall that TX1 had a 512KB GPU L2 cache, but that isn't something I can find any hard data on.

TX1 has 256KB of L2 cache, with some of it used by the tile buffer. Not sure what the buffer size is, maybe someone can test it.

Nintendo Switch Super, 4 GFLOPs in 2020 using 8 watts with 3hrs battery life. Believe.

Not even Nintendo are that bad. lol
 
OMG, now you guys are talking about Switch using Xavier? Xavier doesn't have to run on tiny batteries. It's gonna be sucking down juice more than 10W.

Manage your expectations people! You guys do this to yourselves EVERY. DAMN. TIME.
 
Wow, it's almost as if that's what we have been saying all along: 512-768 gflops. You people come in acting like you are some voice of reason when you are just parroting what we have been saying all along. No one is expecting a 25W monster or anything of the sort.

No it's not, and it's kind of funny that you answer me saying exactly what I'm warning against; you are expecting almost double the performance the specs suggest, and some people are even more confused by the "1TF half precision".

Forget about 768 gflops, and don't think Switch is going to be 512 as a handheld. Reason tells us that it's going to be a system with 300-350 gflops working at 720p, with maybe the possibility of having some extra performance to enable higher resolution while docked. Start from there and you'll save yourselves from a potential big disappointment.
 
Keeping the Wii U version would make no sense financially and I would even call it an insane move.

1. By the time Bloodstained comes out, the Switch would be out for a year and the Wii U dead for 2.
2. The Switch supports the UE4 out of the box. The Wii U does not.
3. Armature would be able to focus on the Vita version.

About that Vita version. I kind of doubt its happening.
 
On a related note, why couldn't the Wii U handle UE4? Phones with weaker GPUs have been able to use it, though I recall it being said that the Wii U's CPU was surpassed by even portables a long time ago.

In a word, resources. Developing an engine isn't a one-time expenditure of resources; it requires long-term maintenance and support, and that doesn't take into account keeping it current with other platforms.

Unreal Engine 4.14 is nearing release and the Wii U's sales wouldn't have warranted porting each version and update.
 
Forget about 768 gflops, and don't think Switch is going to be 512 as a handheld. Reason tells us that it's going to be a system with 300-350 gflops working at 720p, with maybe the possibility of having some extra performance to enable higher resolution while docked. Start from there and you'll save yourselves from a potential big disappointment.

I'm not sure why reason would dictate the Switch having the same level of power as a Google Pixel C in a newer chip design.

Ultimately, we've been given very, very little detail about this thing. Just enough so that people can read into every piece of speculation and rumor however they'd like. But that doesn't lead one to be more reasonable than another simply by virtue of conservatism.
 
And yet I feel it doesn't work with the current rumoured design. Even if it had A72 for the CPU and all the Nvidia tech to work around memory bandwidth issues then, it feels like an HBM2 stack would be overcompensating.

Then again feeding say 1 TFLOPS, not incredibly far from Xbox One's 1.3 TFLOPS, with not even a quarter of the bandwidth Xbox One's GPU has... feels constrained.

Edit: not saying it will be 1 TFLOPS.
 
Keeping the Wii U version would make no sense financially and I would even call it an insane move.

1. By the time Bloodstained comes out, the Switch would be out for a year and the Wii U dead for 2.
2. The Switch supports the UE4 out of the box. The Wii U does not.
3. Armature would be able to focus on the Vita version.
They should just kill the Vita version too.

There is literally no way it comes out halfway decent and what's left of Vita's active userbase will have migrated to the other platforms in 1.5+ years.
 
I do hope the "surprisingly low" or "cheaper than expected" rumor pans out. I remember people predicting $300+ for the Vita due to its modern (at the time) hardware. Most seem to be expecting $299, so maybe $250.

Yeah, wouldn't make sense and I imagine Wii U backers wouldn't mind Switching to another platform.
Yooka Laylee makes a bit more sense since it should be out before Switch and the engine is fully supported.
I wonder if a Vita port would still make sense. Not sure how it's doing in Japan, but sales have been dead everywhere else for a while and I don't see many caring 2 years from now.
The Switch could serve as the portable version if fans wanted one or PS4 if they wanted on PlayStation

The Vita's hardware cost is offset by its expensive proprietary memory cards. It would have cost more if it used standard SD cards.
 
Then again feeding say 1 TFLOPS, not incredibly far from Xbox One's 1.3 TFLOPS, with not even a quarter of the bandwidth Xbox One's GPU has... feels constrained.

Edit: not saying it will be 1 TFLOPS.

The most logical thing would be to consider the 6-month-old rumor that the GPU will be a Pascal-based successor to the GPU inside the devkit (TX1), meaning 16nm and turning 512GF into ~750GF, and in some circumstances, due to fp16, that 1TF can be reached. Considering Parker, Nvidia's 128-bit successor to the TX1, is also a 16nm chip, this would be the most logical and presumably cheap (I guess?) option to get better performance, more bandwidth and scalability (battery life) between docked/portable.
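
For reference, the arithmetic behind those figures (a sketch; only the 256-core count comes from the TX1, the clocks and double-rate fp16 are assumptions):

# FP32 GFLOPS = shader cores * 2 ops per clock (FMA) * clock in GHz
cores = 256                          # TX1-class Maxwell/Pascal GPU
for clock_ghz in (1.0, 1.5):         # 1.0 GHz ~ stock TX1, 1.5 GHz ~ assumed 16nm uplift
    fp32 = cores * 2 * clock_ghz
    print(clock_ghz, fp32, fp32 * 2) # last column: fp16 rate, if double-rate fp16 is exposed

That's where the 512GF, ~768GF and "1TF half precision" figures all come from.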
 
Then again feeding say 1 TFLOPS, not incredibly far from Xbox One's 1.3 TFLOPS, with not even a quarter of the bandwidth Xbox One's GPU has... feels constrained.

Edit: not saying it will be 1 TFLOPS.

Thraktor already explained months ago in the NX speculation threads how, because of Nvidia's tile-based rasterizer tech, if Nintendo were to use embedded RAM they'd only need 4MB of it compared to the Wii U's 32MB.

Since there's no embedded DRAM at 16nm (unless they went to Intel), they'd have to use eSRAM like the Xbox One. However, that takes up more die space and is more costly.

That is why, after Thraktor saw the 4GB of RAM in the final retail unit from Emily Rogers, he speculated the bandwidth would be around 50GB/s with a larger L2 or L3 cache.

The Wii U didn't have much memory bandwidth: it had 2GB of DDR3 RAM at 12.8GB/s. It's been suggested that the BW of the 32MB eDRAM was only 70GB/s because the bus wasn't that wide.

When you see the comparison, LPDDR4 RAM doesn't look to be that bad, especially since Nvidia has tools like the above to support games compared to AMD.

HBM2 would be wasted performance, assuming the cost difference is large compared to a wider LPDDR4 setup with more cache.
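
The LPDDR4 side of that comparison is easy to sanity-check (the bus widths and transfer rates below are assumptions, not confirmed Switch specs):

# Peak bandwidth (GB/s) = bus width in bytes * transfer rate (MT/s) / 1000; configs assumed
def peak_gbs(bus_bits, mts):
    return bus_bits / 8 * mts / 1000

print(peak_gbs(64, 1600))    # 12.8 GB/s - the Wii U's DDR3 pool, for comparison
print(peak_gbs(64, 3200))    # 25.6 GB/s - a single 64-bit LPDDR4 channel, TX1-style
print(peak_gbs(128, 3200))   # 51.2 GB/s - a 128-bit LPDDR4 bus, close to the ~50GB/s speculation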

With what the Switch is currently, Nintendo are better off waiting for the next iteration of HBM when they make a successor/new iteration of the Switch that would be more powerful.

Reasons below:

http://arstechnica.com/gadgets/2016/08/hbm3-details-price-bandwidth/

Despite first- and second-generation High Bandwidth Memory having made few appearances in shipping products, Samsung and Hynix are already working on a followup: HBM3. Teased at the Hot Chips symposium in Cupertino, California, HBM3 will offer improved density, bandwidth, and power efficiency. Perhaps most importantly though, given the high cost of HBM1 and HBM2, HBM3 will be cheaper to produce.

HBM3 will feature a lower core voltage than the 1.2V of HBM2, as well as more than two times the peak bandwidth: HBM2 offers 256GB/s of bandwidth per layer of DRAM, while HBM3 doubles that to 512GB/s. The total amount of memory bandwidth available could well be terabytes per second.

Both Hynix and Samsung are working on HBM3 (Samsung is calling its version "Extreme HBM"), although neither has committed to a firm release date.

Meanwhile, Samsung has been working on making HBM cheaper by removing the buffer die, and reducing the number of TSVs and interposers. While these changes will have an impact on the overall bandwidth, Samsung has increased the individual pin speed from 2Gbps to 3Gbps, offsetting the reductions somewhat. HBM2 offers around 256GB/s bandwidth, while low cost HBM will feature approximately 200GB/s of bandwidth. Pricing is expected to be far less than that of HBM2, with Samsung targeting mass market products.

More at the link.
 
OMG, now you guys are talking about Switch using Xavier? Xavier doesn't have to run on tiny batteries. It's gonna be sucking down juice more than 10W.

Manage your expectations people! You guys do this to yourselves EVERY. DAMN. TIME.

So what you are saying is that we should expect two Xaviers in the Switch?
 
OMG, now you guys are talking about Switch using Xavier? Xavier doesn't have to run on tiny batteries. It's gonna be sucking down juice more than 10W.

Manage your expectations people! You guys do this to yourselves EVERY. DAMN. TIME.

By Xavier, I assume people mean Volta. Your argument about power consumption makes little sense, since every successor is more power efficient than its predecessor. So will Volta be, meaning it could be clocked lower to get the same performance as Pascal or Maxwell, and boost battery life.

However, I will cut off my left nut if Volta turns up in the Switch.
 
By Xavier, I assume people mean Volta. Your argument about power consumption makes little sense, since every successor is more power efficient than its predecessor. So will Volta be, meaning it could be clocked lower to get the same performance as Pascal or Maxwell, and boost battery life.

However, I will cut off my left nut if Volta turns up in the Switch.
Screenshotted.
 
False equivalence. PCs need RAM for a variety of things alongside games, so game requirements require a lot of leeway, whereas consoles are designed to allow games to use the full amount of a strict memory pool.

It used to be that way. Now console OSes are hogging as much memory as a PC OS. I think the PS4 OS uses something like 100x more RAM than the PS3 OS.

To add to this, most PC GPUs have dedicated VRAM in addition to the main RAM. Consoles only have one pool.
 
It used to be that way. Now console OSes are hogging as much memory as a PC OS. I think the PS4 OS uses something like 100x more RAM than the PS3 OS.

To add to this, most PC GPUs have dedicated VRAM in addition to the main RAM. Consoles only have one pool.

Well, that's because the PS4 is a multimedia system that for some reason wants to be able to run a game, Netflix and Spotify (for the sake of example) at the same fucking time, along with the whole share button system. Nintendo doesn't give a shit about that, though, so the OS is likely going to be a lot more lean, probably limited to the home menu, share button, and browser.
 
Well, that's because the PS4 is a multimedia system that for some reason wants to be able to run a game, Netflix and Spotify (for the sake of example) at the same fucking time, along with the whole share button system. Nintendo doesn't give a shit about that, though, so the OS is likely going to be a lot more lean, probably limited to the home menu, share button, and browser.

The issue is that the consoles have at least 5 GB of RAM available in their memory pool, and if these rumors are true the Switch is going to have between 3 and 3.5 GB available depending on how much RAM the OS takes up. I imagine that a lot of major third party games may not be designed to work with that little RAM. Who knows how easy or how willing devs will be to port their games to the Switch.
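
A trivial sketch of the gap being described (the OS reservations here are this thread's guesses, not confirmed figures):

# RAM left for games = total RAM - OS reservation (all numbers are guesses from this thread)
configs = [
    ("PS4 / Xbox One", 8.0, 3.0),
    ("Switch, lean OS", 4.0, 0.5),
    ("Switch, heavier OS", 4.0, 1.0),
]
for name, total, reserved in configs:
    print(name, total - reserved, "GB for games")   # ~5.0, ~3.5 and ~3.0 GB respectively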
 
All that wouldn't really affect my argument. To look at it from another angle, the internal space of an iPad Air 2 is almost entirely taken up by its batteries, and they provide ~27 watt-hours of energy. The batteries of an Nvidia Shield Tablet only provide ~19 watt-hours. The Switch certainly doesn't look like it has enough room for larger batteries than that.

Well, the iPad Air 2 has a much bigger battery than a Shield Tablet as you say, yet the Shield Tablet is actually the larger device of the two internally, so it's hard to estimate battery purely on size. Looking at the Switch, it could easily have a similar internal area to those two, considering it looks far thicker. Really hard to say what battery it can use, but I'm expecting it to consume around 8W max in mobile mode.

Targeting a battery life of 3-4 hours with non-stop gaming, everybody can roughly estimate how much power the Switch's display, GPU, CPU, and all the rest are allowed to consume in total. Possible clock speeds then certainly become an interesting point of discussion.

Looks like battery life is struggling to hit 3 hours according to reports/rumours. Also, the system could easily drop OS features in mobile mode, allowing a CPU core to be disabled or clocked right down, with the GPU and memory clocking down as well given the lower target resolution.

I mean, I'm not even assuming the CPU setup at the start of this thread is going to be the final setup, but I certainly expect that kind of performance or more (4x A57 @ 2GHz) to be there. It has to be, to be up there with the PS4/Xbox One CPUs.
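
The rough estimate being alluded to is just battery energy divided by average power draw (the figures below are the ones quoted in this exchange, plus the 8W guess above):

# Runtime (hours) = battery energy (Wh) / average system power (W); inputs from this exchange
shield_wh, ipad_wh = 19.0, 27.0
avg_power_w = 8.0                        # the mobile-mode guess above
print(shield_wh / avg_power_w)           # ~2.4 h on a Shield-Tablet-sized battery
print(ipad_wh / avg_power_w)             # ~3.4 h on an iPad-Air-2-sized battery
print(3 * avg_power_w, 4 * avg_power_w)  # Wh needed for 3 or 4 hours at that draw: 24 / 32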
 
Well, that's because the PS4 is a multimedia system that for some reason wants to be able to run a game, Netflix and Spotify (for the sake of example) at the same fucking time, along with the whole share button system. Nintendo doesn't give a shit about that, though, so the OS is likely going to be a lot more lean, probably limited to the home menu, share button, and browser.

Considering their last 2 consoles supported the likes of Netflix I'm not convinced it'll be quite as lean as you're imagining.
 
And yet I feel it doesn't work with the current rumoured design. Even if it had A72 for the CPU and all the Nvidia tech to work around memory bandwidth issues then, it feels like an HBM2 stack would be overcompensating.

It's worth noting that Nintendo want these NS devices to be iterative consoles, so HBM2 would somewhat future-proof the device and probably help with compatibility with the Switch 2.
 
I could totally see the thing ending up with a 64-bit bus/25GB/s if they reduced the GPU core count to, say, 128. That should be a nice match for the GPU and not be such an obvious bottleneck. They also get the added benefit of increasing battery life quite a bit during gaming.

256 GFLOPS with Maxwell featureset is still a pretty decent upgrade from Wii U.
 
Considering their last 2 consoles supported the likes of Netflix I'm not convinced it'll be quite as lean as you're imagining.

I doubt it would be like PS4 though, where Netflix could be running in the background of a game or vice versa. You don't need much OS RAM to run Netflix when there is no game being suspended at the time- you have access to all of the system RAM then.
 
I could totally see the thing ending up with a 64-bit bus/25GB/s if they reduced the GPU core count to, say, 128. That should be a nice match for the GPU and not be such an obvious bottleneck. They also get the added benefit of increasing battery life quite a bit during gaming.

256 GFLOPS with Maxwell featureset is still a pretty decent upgrade from Wii U.

The X1 is 512 GFLOPS with the same 25GB/s bandwidth, so obviously Nvidia feel a Maxwell GPU with 256 cores is an OK match for that bandwidth. Plus, if they did drop processing power they'd go for a lower clock rather than cutting cores.
 
No it's not, and it's kind of funny that you answer me saying exactly what I'm warning against; you are expecting almost double the performance the specs suggest, and some people are even more confused by the "1TF half precision".

Forget about 768 gflops, and don't think Switch is going to be 512 as a handheld. Reason tells us that it's going to be a system with 300-350 gflops working at 720p, with maybe the possibility of having some extra performance to enable higher resolution while docked. Start from there and you'll save yourselves from a potential big disappointment.
Why do people feel the need to always try to manage other people's expectations? Most are adults here and they can manage their own stuff?
 
Considering their last 2 consoles supported the likes of Netflix I'm not convinced it'll be quite as lean as you're imagining.

Netflix can use 3.2GB of memory, I imagine... because it will be an "app" that will not run WHILE you are running a game. So to the OS, Netflix is just "like" a game. And if Netflix can run on a Wii, why would it be such a problem?
 
Why do people feel the need to always try to manage other people's expectations? Most are adults here and they can manage their own stuff?

I find it reasonable to warn people against going the WUST way and spare them future meltdowns; can't see the harm in it.

I could totally see the thing ending up with a 64-bit bus/25GB/s if they reduced the GPU core count to, say, 128. That should be a nice match for the GPU and not be such an obvious bottleneck. They also get the added benefit of increasing battery life quite a bit during gaming.

256 GFLOPS with Maxwell featureset is still a pretty decent upgrade from Wii U.

That would be quite counter-productive; halving the core count to balance bandwidth would do much more harm than good to the system.
 
Netflix can use 3.2GB of memory, I imagine... because it will be an "app" that will not run WHILE you are running a game. So to the OS, Netflix is just "like" a game. And if Netflix can run on a Wii, why would it be such a problem?

The full Netflix client runs great on Wii U, which had much less RAM than what we're talking about here for applications. It also runs on all manner of iPhones and iPads that max out at 4GB of RAM for the highest-end iPad Pros.
 
I could totally see the thing ending up with a 64-bit bus/25GB/s if they reduced the GPU core count to, say, 128. That should be a nice match for the GPU and not be such an obvious bottleneck. They also get the added benefit of increasing battery life quite a bit during gaming.
I could totally see you totally seeing that.
 
I could totally see the thing ending up with a 64-bit bus/25GB/s if they reduced the GPU core count to, say, 128. That should be a nice match for the GPU and not be such an obvious bottleneck. They also get the added benefit of increasing battery life quite a bit during gaming.

256 GFLOPS with Maxwell featureset is still a pretty decent upgrade from Wii U.

I agree with what you're trying to say: Nintendo going more conservative than people expect, just like they did with the Wii and Wii U. It's still a possibility, but it would be strange when they want to stress this is not a "handheld" as such, but more a home console or hybrid, to use a chipset that is intended for mobile and gimp it, while its main feature is to also use it as a home console. If they were just making a new handheld alongside a home console, then yes, certainly. However, in this case I would think not... but it is always possible. Just like they gave us the Wii, and just like people thought they wouldn't do it again with the Wii U. How many times did people think "Nintendo would have to try hard to release something in 2012 that is only as powerful as a 360" or that they "would need to go out of their way", etc.? So I still consider it, but I hope and expect they won't. It also just makes the most sense: a Pascal successor to a 500GF chip that can save power at the same performance (due to the smaller fab node) while portable, and boost performance while docked.
 
Wow, it's almost as if that's what we have been saying all along: 512-768 gflops. You people come in acting like you are some voice of reason when you are just parroting what we have been saying all along. No one is expecting a 25W monster or anything of the sort.
Yeah, I wonder why some people would be OK with the latter and not with the former; it's just 50% more performance. Who ever asked for PS4 Pro to be as powerful as Scorpio, or the Xbone to be on par with PS4, right?

No it's not, and it's kind of funny that you answer me saying exactly what I'm warning against; you are expecting almost double the performance the specs suggest, and some people are even more confused by the "1TF half precision".

Forget about 768 gflops, and don't think Switch is going to be 512 as a handheld. Reason tells us that it's going to be a system with 300-350 gflops working at 720p, with maybe the possibility of having some extra performance to enable higher resolution while docked. Start from there and you'll save yourselves from a potential big disappointment.
Correct. There's no way in hell that this thing is clocked at 1GHz when used standalone. 512 Gflops is the max we can expect when docked, and around 350 when used on the go. 2x Wii U for 720p and 3x for 900p-1080p gaming on the TV. Basically the Switch is going to be weaker than what people thought the Wii U would've been (600-800gflops) 5 years ago in the WUSTs, and perfectly in line with the two latest Nintendo consoles. "Industry leading chips" and "new Gamecube" lol

And please don't even start with Volta.

Are we basically expecting a custom version of Maxwell which will use Pascal and be smaller in size? Slightly more powerful than standard Maxwell and a little more battery efficient?
It's the TX1 with a die shrink and a few minor tweaks. Pascal is pretty much a 16nm version of Maxwell with a few improvements.
 