Reading up on Huawei's new Kirin 960, I'm starting to hope that Nintendo & Nvidia have managed to squeeze in the new A73 core (and TSMC's new 16FFC process) for Switch's SoC. Performance per clock isn't that much higher than the A72 (perhaps ~15% according to early Geekbench results), but the A73's improved power efficiency combined with 16FFC's reduced power draw could actually give them a quite sizeable performance boost over A72/16FF+ at the power draw Nintendo's likely to be targeting.
I've put together a graph of estimated power curves to illustrate the improvement. The A72 and A53 curves on 16FF+ are from Anandtech's Kirin 950 analysis, so should be pretty accurate. The dashed lines are my estimates based on simple extrapolation from those results. I've used ARM's claim of roughly a 25% reduction in power draw from A72 to A73, but the 16FFC savings are a little trickier, as I haven't been able to find any decent figures for what to expect for this kind of chip. TSMC themselves have claimed as much as 50% power savings for 16FFC over 16FF+, but this is likely for ultra-low-power IoT chips, which can take advantage of 16FFC's lower minimum voltage. I've ended up going with a 20% power saving, as it lines up with Huawei's claim that the Kirin 960 is 40% more power efficient than its predecessor.
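For anyone who wants to reproduce the extrapolation, here's a minimal sketch of how the dashed curves are derived: take the measured A72/16FF+ frequency/power points and scale the power figure by the assumed savings (the combined factor works out to roughly 40% lower power at the same clock). The base data points below are illustrative placeholders, not Anandtech's actual measurements.

```python
# Rough sketch of how the dashed estimate curves are derived.
# The base (freq_GHz, power_W) points are illustrative placeholders,
# not Anandtech's measured Kirin 950 figures.
A72_16FF_PLUS = [(1.0, 0.75), (1.4, 1.20), (1.8, 1.90)]  # 4-core A72 cluster on 16FF+

A73_POWER_SAVING = 0.25   # ARM's ~25% power reduction claim, A72 -> A73
FFC_POWER_SAVING = 0.20   # my assumed 16FF+ -> 16FFC saving

def estimate_a73_16ffc(curve):
    """Scale an A72/16FF+ power curve into an estimated A73/16FFC curve."""
    scale = (1 - A73_POWER_SAVING) * (1 - FFC_POWER_SAVING)  # ~0.60
    return [(freq, power * scale) for freq, power in curve]

for (freq, p72), (_, p73) in zip(A72_16FF_PLUS, estimate_a73_16ffc(A72_16FF_PLUS)):
    print(f"{freq:.1f} GHz: A72/16FF+ {p72:.2f} W -> est. A73/16FFC {p73:.2f} W")
```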
As you can see, the combined benefit of the move to 16FFC and the jump to the A73 on top of it is quite substantial. To take a single data point, at a 1W power draw you could either get a cluster of A72s on 16FF+ at 1.2GHz, or a cluster of A73s on 16FFC at 1.6GHz. Taking the A73's improved performance per clock into account, you could be looking at as much as a 50% improvement in performance at the same power draw.
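To spell out that data point: 1.6GHz / 1.2GHz gives a ~33% clock-speed advantage, and folding in the ~15% per-clock uplift gives 1.33 × 1.15 ≈ 1.53, hence the roughly 50% figure.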
The A53s in a big.LITTLE config would also get a boost from the move to 16FFC. With a 1.5W power draw on 16FF+, you could get 4x A72 @ 1.35GHz + 4x A53 @ 1.3GHz, whereas on 16FFC with A73s you could manage 4x A73 @ 1.75GHz + 4x A53 @ 1.5GHz. Even at a much lower power limit you could still get pretty decent performance: for example, at 750mW a 2x A73 @ 1.6GHz + 4x A53 @ 1.35GHz setup would outperform the PS4's Jaguar cores at most single-threaded tasks, and would provide surprisingly respectable performance in heavily multi-threaded scenarios.
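If you want to play with other combinations, the check is just a matter of summing per-cluster power against the target limit. Here's a minimal sketch of that; the per-cluster wattages are placeholders read off the estimated curves above, not measured figures.

```python
# Check whether a big.LITTLE configuration fits within a power budget.
# Per-cluster power figures (watts) are illustrative placeholders read off
# the estimated curves, not measured values.
def fits_budget(clusters, budget_w):
    """clusters: list of (label, estimated cluster power in watts)."""
    total = sum(power for _, power in clusters)
    status = "within" if total <= budget_w else "over"
    print(f"{' + '.join(label for label, _ in clusters)} = {total:.2f} W "
          f"({status} the {budget_w} W budget)")
    return total <= budget_w

# Hypothetical 16FFC configurations from the examples above:
fits_budget([("4x A73 @ 1.75GHz", 1.05), ("4x A53 @ 1.5GHz", 0.45)], 1.5)
fits_budget([("2x A73 @ 1.6GHz", 0.45), ("4x A53 @ 1.35GHz", 0.30)], 0.75)
```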
It's also worth noting that cost isn't what would keep Nintendo from adopting either of these new technologies. In fact, both 16FFC and the A73 are substantially cheaper than what they replace (16FFC is 10-20% cheaper per die than 16FF+, and the A73 is ~25% smaller, i.e. cheaper, than the A72). The question here is really the timescale. The first chips using 16FFC and containing A73 cores are only just starting to trickle out now, 4-5 months before Switch's launch. Whether Nintendo would have chosen either would depend on (a) what the console's release target was during SoC design and (b) how confident they were in either or both of these being ready for that target.
On part (a), it does seem likely that Nintendo's internal target for Switch's release was around now, and that was probably only changed early this year. From that point of view, A73 cores would seem a tight squeeze, and Nintendo may not even have had the option of using them during initial design, and may not have been willing to delay the tape-out by swapping them in once they became available. I don't think 16FFC would have been too tight a squeeze for a November 2016 launch, though. Nintendo would have known about TSMC's 16FFC plans from the beginning, and Nvidia have a very good relationship with TSMC, so they would have known exactly where yields were and how likely it was to be feasible for a late 2016 launch. It's also something Nintendo may have been more willing to risk, given the substantial cost and power savings for the entire SoC from moving to 16FFC. If Switch was always targeted for early 2017, though, or if the delay was in part to allow them to move to 16FFC and/or A73 cores, then all bets are off.
At this point I think 16FFC seems reasonably likely, but A73s may be a bit much to ask. Nonetheless it's interesting to consider the options open to Nintendo on the CPU front, given the need to keep performance high and power consumption low to make the system in any way competitive in handheld mode.