37-Minute Direct-Feed Cyberpunk 2077 Footage from Switch 2

😬.

It's going to be interesting to see how these third-party games perform on an ecosystem that has become irrelevant in that sphere.
Having a bigger user base to sell to is now a bad thing?
 
How do you explain flops that release everywhere, then?

What I am saying is that the SW2 will be a new console. I wonder if third-party games will see success when Nintendo's consoles have lost that aspect of their identity.
That's the game's fault… Still, bigger user base = more sales, by simple logic.
 
For the thousands of Switch owners that never bought/played it on Sony or MS' platforms?

Not every gamer owns every system or has played every game available on every system.

Also for those that want a superior portable version of Cyberpunk and don't want to pay $500+ for a system they then have to tinker with settings on to get a halfway decent version of the game.
Thousands? Based on console sales, there are easily at least ten million Switch owners who don't own a PS/MS console.
 
What do you think the real TF figure is on Switch 2? Just out of curiosity 🤔

3.1 is the real TF number. the issue is that it's hard to compare that to GCN or RDNA2 TF numbers.

neither RDNA2 nor GCN use dual issue FP32, Ampere does.
and there's a reason Sony didn't even try saying the PS5 Pro GPU is 32 TFLOPS, as they knew it would set the wrong expectations.
they could have btw., and it wouldn't have been a lie... the PS5 Pro is a 32 TFLOPS machine... but it's a 32 TFLOPS RDNA3 machine, and RDNA3 uses dual issue FP32, and so this 32 TFLOPS RDNA3 GPU is only 45% faster than the 10.28 TFLOPS RDNA2 GPU of the base PS5.

you can't even directly compare a 3.1 TFLOPS Nvidia GPU to a 3.1 TFLOP RDNA3 GPU, even tho they both use dual issue FP32. so comparing an Nvidia dual issue FP32 GPU to an AMD GPU that doesn't use it is not gonna give you good estimates of how performance compares.


So TF number is fake?

no, but TF numbers have become very murky in the last couple of years.
the ROG Ally is an 8 TF handheld for example... and that's not a fake number either, but if you just look at that number and expect it to run games better than a Series S, and almost as well as a PS5, you will be disappointed
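To put the quoted numbers in code form, here's a rough sketch. The 45% uplift is the figure cited above; everything else is simple arithmetic, not an official spec:

```python
# Rough sketch of why paper TFLOPS mislead across architectures.
# Numbers are the ones quoted above; nothing here is an official spec.
base_ps5_tflops = 10.28   # base PS5, RDNA2 (single-issue FP32)
pro_ps5_tflops = 32.0     # PS5 Pro counted with RDNA3 dual-issue FP32
real_uplift = 1.45        # ~45% real-world GPU gain cited above

paper_ratio = pro_ps5_tflops / base_ps5_tflops
print(f"on paper:    {paper_ratio:.2f}x")          # ~3.11x
print(f"in practice: {real_uplift:.2f}x")

# The Pro's "RDNA2-equivalent" figure implied by that uplift:
print(f"equivalent:  {base_ps5_tflops * real_uplift:.2f} TF")  # ~14.91 TF
```

The gap between the 3.11x paper ratio and the 1.45x real uplift is the whole point: the TFLOPS number isn't fake, it just stops being comparable across counting conventions.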
 
According to Richard, the Switch 1 received an increase in GPU clock speed at some point, so it's totally possible that could happen again. If the Max Speed information is there, it's there for a reason.
That was a different CPU profile only for loading screens, IIRC, but I'm open to being surprised. Not that it will need it, though; what it would actually need is a TV-only version with the full GPU power unlocked and everything set to max theoretical frequencies that don't potentially affect game logic (CPU, memory bandwidth, etc.). Make me happy, Nintendo.
 
3.1 is the real TF number. the issue is that it's hard to compare that to GCN or RDNA2 TF numbers.

neither RDNA2 nor GCN use dual issue FP32, Ampere does.
and there's a reason Sony didn't even try saying the PS5 Pro GPU is 32 TFLOPS, as they knew it would set the wrong expectations.
they could have btw., and it wouldn't have been a lie... the PS5 Pro is a 32 TFLOPS machine... but it's a 32 TFLOPS RDNA3 machine, and RDNA3 uses dual issue FP32, and so this 32 TFLOPS RDNA3 GPU is only 45% faster than the 10.28 TFLOPS RDNA2 GPU of the base PS5.

you can't even directly compare a 3.1 TFLOPS Nvidia GPU to a 3.1 TFLOP RDNA3 GPU, even tho they both use dual issue FP32. so comparing an Nvidia dual issue FP32 GPU to an AMD GPU that doesn't use it is not gonna give you good estimates of how performance compares.




no, but TF numbers have become very murky in the last couple of years.
the ROG Ally is an 8 TF handheld for example... and that's not a fake number either, but if you just look at that number and expect it to run games better than a Series S, and almost as well as a PS5, you will be disappointed
But the TF formula used for this machine is basically the same one AMD uses for the RDNA2 console GPUs, IIRC. It doesn't include dual issue as far as I can tell, and the machine is behaving exactly as one would expect compared to the Xbox Series S.
 
no, but TF numbers have become very murky in the last couple of years.
the ROG Ally is an 8 TF handheld for example... and that's not a fake number either, but if you just look at that number and expect it to run games better than a Series S, and almost as well as a PS5, you will be disappointed

Because the ROG Ally, and any upcoming PC handheld in the near future, is limited by...



LPDDR5 & LPDDR5X

There are no architectures in the world right now, from Qualcomm, Intel, AMD, or Nvidia, that can feed big compute GPUs with that kind of bandwidth.

[Chart: iGPU compute growth vs. memory bandwidth]


"In contrast, Ryzen Z1 Extreme's very large and fast iGPU outpaces advances in memory bandwidth"

And Z2 Extreme will be the same.

Since Valve said there won't be a Steam Deck 2 until a generational leap in compute, they have to wait for mobile memory to catch up. It doesn't matter if you throw more CUs at the GPU if you can't feed them.
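The bandwidth wall is easy to see with the standard peak-bandwidth formula. The bus widths and transfer rates below are illustrative examples, not any specific device's spec sheet:

```python
def peak_bandwidth_gb_s(mt_per_s, bus_width_bits):
    """Peak memory bandwidth: transfers per second times bytes per transfer."""
    return mt_per_s * 1e6 * (bus_width_bits / 8) / 1e9

# Illustrative configurations (assumed, not official spec sheets):
print(peak_bandwidth_gb_s(6400, 128))   # LPDDR5-6400, 128-bit  -> 102.4 GB/s
print(peak_bandwidth_gb_s(8533, 128))   # LPDDR5X-8533, 128-bit -> ~136.5 GB/s
print(peak_bandwidth_gb_s(14000, 320))  # 14 Gbps GDDR6, 320-bit -> 560.0 GB/s
```

Mobile LPDDR on a narrow bus simply can't reach the hundreds of GB/s that a wide GDDR6 console bus delivers, which is the "can't feed the CUs" point above.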
 
How do you explain flops that release everywhere, then?

What I am saying is that the SW2 will be a new console. I wonder if third-party games will see success when Nintendo's consoles have lost that aspect of their identity.

Third parties did well with the Switch, and with the Switch 2 being backwards compatible, you'll see third parties still targeting that 150M+ userbase, a la PS4 games still being released.
 
We've finally got an early fps analysis of the released docked footage for Cyberpunk.

I hate to say it, but it's not looking good. It can drop as low as 17 fps, as seen at 11:56. I think the insane claims that it was anywhere near a Series S can end. With internal resolutions of sub-720p in performance mode, and it can't even hold 30 fps? The more I see of Cyberpunk, the more I want to cancel my preorder for the game. It's pretty much $100 CAD for a nearly 5-year-old game that drops to 17 fps on new hardware.

 
We've finally got an early fps analysis of the released docked footage for Cyberpunk.

I hate to say it, but it's not looking good. It can drop as low as 17 fps, as seen at 11:56. I think the insane claims that it was anywhere near a Series S can end. With internal resolutions of sub-720p in performance mode, and it can't even hold 30 fps? The more I see of Cyberpunk, the more I want to cancel my preorder for the game. It's pretty much $100 CAD for a nearly 5-year-old game that drops to 17 fps on new hardware.



Whaaat?? Same vid as in the OP, and people say it looks good: 30 fps in PERFORMANCE MODE 90 percent of the time, destroys Steam Deck.




FsioefR.jpeg


cCa3tk4.jpeg
 
We've finally got early fps analysis of the released docked footage for cyberpunk.



I hate to say it but it's not looking good. It can drop as low as 17fps as seen at 11:56. I think that the insane claims that it was anywhere near a series s can end. With internal resolutions of sub 720p in performance modes and it can't even hold 30 fps? The more I see of cyberpunk, the more I want to cancel my preorder for the game. It's pretty much $100 cad for a near 5 year old game that drops to 17 fps on new hardware.


What?? 😂😂 The game runs at 30 fps, like, above 95% of the time, especially in some heavy fighting scenes, and it's not even out yet. There's room for improvement.
 
What?? 😂😂 The game runs at 30fps like above 95% of the time, especially in some heavy fighting scenes and is not even out yet. There's room for improvement.
It's all true. I played an old build at an experience event and it's waaaaaaaaaaaaay better now. CDPR will patch it for years, like with The Witcher. Bookmarked.
 
3.1 is the real TF number. the issue is that it's hard to compare that to GCN or RDNA2 TF numbers.

neither RDNA2 nor GCN use dual issue FP32, Ampere does.
and there's a reason Sony didn't even try saying the PS5 Pro GPU is 32 TFLOPS, as they knew it would set the wrong expectations.
they could have btw., and it wouldn't have been a lie... the PS5 Pro is a 32 TFLOPS machine... but it's a 32 TFLOPS RDNA3 machine, and RDNA3 uses dual issue FP32, and so this 32 TFLOPS RDNA3 GPU is only 45% faster than the 10.28 TFLOPS RDNA2 GPU of the base PS5.

you can't even directly compare a 3.1 TFLOPS Nvidia GPU to a 3.1 TFLOP RDNA3 GPU, even tho they both use dual issue FP32. so comparing an Nvidia dual issue FP32 GPU to an AMD GPU that doesn't use it is not gonna give you good estimates of how performance compares.




no, but TF numbers have become very murky in the last couple of years.
the ROG Ally is an 8 TF handheld for example... and that's not a fake number either, but if you just look at that number and expect it to run games better than a Series S, and almost as well as a PS5, you will be disappointed
So handheld is 1.7 TF and docked is 3.1 TF. Got it.
 


Ew, GVG. I actually blocked YouTube from recommending their channel to me, because the silly cancellation drama they tried to inflict on GameXplain during their departure (which basically amounted to trying to shake Andre down for a bigger share of the channel revenue) was profoundly scummy.

And it's a shame, because I liked him on Nintendo Life, but I can't stand Jon Cartwright anymore after seeing what a sanctimonious political douchebag he is on Twitter.
 
What?? 😂😂 The game runs at 30fps like above 95% of the time, especially in some heavy fighting scenes and is not even out yet. There's room for improvement.
It doesn't at all. It frequently drops from 30, but only to like 28. Occasionally it drops to like 24-25. Then you have the rare drops, which I highlighted.

The Mario Kart bundle is 700 CAD. Cyberpunk is an additional hundred. For an extra 200, I can get a ROG Ally X and play at a higher resolution with frame rates above 60 fps. The Switch 2 version is just poor. It's wild to me that Nintendo can release new handhelds with half the power of handhelds released a year or two ago. Based on what we've seen, the Ally X is 2x faster than the Switch 2.
 
So handheld is 1.7 TF and docked is 3.1 TF. Got it.

Yeah, now deduct roughly 30%~40% from that and you'll have a rough comparison to RDNA2 GPUs... but even that's not an exact way to compare them.

Going by PC performance, a 30 TF Ampere card is roughly on par with a 20 TF RDNA2 card.
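That rule of thumb is easy to sanity-check in a few lines. Treat the deduction range as a fuzzy heuristic, not a law:

```python
# Heuristic from the post above: knock ~30-40% off Ampere TFLOPS to get a
# rough RDNA2-equivalent figure. Purely a rule of thumb, not a benchmark.
def ampere_to_rdna2_equiv(ampere_tf, deduction):
    return ampere_tf * (1 - deduction)

for tf in (1.7, 3.1, 30.0):
    lo = ampere_to_rdna2_equiv(tf, 0.40)
    hi = ampere_to_rdna2_equiv(tf, 0.30)
    print(f"{tf:>4} TF Ampere ~ {lo:.1f}-{hi:.1f} TF RDNA2-equivalent")
```

The 30 TF row lands around 18-21, which brackets the "30 TF Ampere is roughly a 20 TF RDNA2 card" observation above.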
 
We've finally got early fps analysis of the released docked footage for cyberpunk.



I hate to say it but it's not looking good. It can drop as low as 17fps as seen at 11:56. I think that the insane claims that it was anywhere near a series s can end. With internal resolutions of sub 720p in performance modes and it can't even hold 30 fps? The more I see of cyberpunk, the more I want to cancel my preorder for the game. It's pretty much $100 cad for a near 5 year old game that drops to 17 fps on new hardware.





Just going to copy this from another forum:
If you frame-by-frame the video during a few of the spots where it's supposed to have dropped to 17 fps, you get a complete, full frame every time you press frame advance during those drops, which is what you would expect if it's running at 30 fps in a 30 fps YouTube video, not if it's running at 17-20, such as the one at this timestamp. And it doesn't look like it suddenly drops to half its intended framerate either. Are we sure this is even working correctly? It's also weird that the overlay is not on screen for some of the clips or the entire back half of the footage. I don't get it.

EDIT: Yeah, watching a DF framerate analysis, anytime there's a dropped frame you have to frame advance twice to move past it and get another frame. That isn't happening in this footage AT ALL. Example here: frame advance through one of those framerate drops and you'll see it takes more than one press per frame anytime it isn't running at 60.

It looks like his software is interpreting a large spike as a drop in framerate lasting multiple, even dozens of, frames, when it's only a single spike in the graph, not the repeated spikes you should see if frames are actually being dropped. So this framerate counter is not accurate at all, I think? As an example, there's a single large spike on the frametime graph, and the framerate counter interprets that one, single spike as a ton of dropped frames in a row, when it can't be, evidenced by the fact that you still get a full frame every time you advance. IDK, maybe I'm wrong, but it seems odd relative to framerate-graph behavior from DF and other places.
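The frame-advance test described above can be automated. Here's a minimal sketch of the idea behind counters like trdrop: step through a fixed-rate capture and count only frames that actually changed. The frame data is made up for the demo:

```python
def delivered_fps(frames, capture_fps=60):
    """Estimate a game's delivered framerate from a fixed-rate capture:
    a frame identical to the previous one means the game held the old
    image, so only changed frames count."""
    changed = 1  # the first frame always counts
    for prev, cur in zip(frames, frames[1:]):
        if cur != prev:
            changed += 1
    return changed / len(frames) * capture_fps

# Demo: a game rendering a steady 30 fps inside a 60 fps capture shows
# every image twice, exactly the "two presses per frame" pattern above.
images = [[(i * 31 + j) % 255 for j in range(16)] for i in range(30)]
capture = [img for img in images for _ in range(2)]  # 60 captured frames
print(delivered_fps(capture))  # -> 30.0
```

It also shows the failure mode being debated: if the game runs at the same rate as the capture (30 fps in a 30 fps video), every captured frame is unique and no drop below that rate is detectable this way.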
 
But the TF formula used for this machine is basically the same one AMD uses for the RDNA2 console GPUs, IIRC. It doesn't include dual issue as far as I can tell, and the machine is behaving exactly as one would expect compared to the Xbox Series S.

The reason is that Nvidia implemented it differently.
AMD simply added the possibility of issuing 2 instructions per shader, which is mostly useless in games, as it only applies to cases where you have to do the same instruction twice.

Nvidia actually doubled the number of CUDA cores, but if I understand it correctly, only 2/3 of them can do floating-point calculations while the rest can only do integer calculations (it gets a bit complicated, as not all of the floating-point shaders can be utilised while the integer ones are being used... and all in all I don't fully understand the pros and cons of this approach, tbh).

So it's extremely hard to compare Ampere to RDNA3, and especially to RDNA2.

Generally speaking, Ampere TFLOPS are not quite as "inflated" as RDNA3 TFLOPS. You can almost perfectly split RDNA3 TFLOPS in half to get an estimate of an equivalent RDNA2 GPU's performance, aka real-world gaming performance.
On Ampere it's more like deducting 30%~40% to be roughly comparable to RDNA2. But that's still very murky...

So basically, they are very hard to compare 1 to 1, as they are very different architectures.
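One toy way to arrive at a deduction figure like that: NVIDIA's Turing whitepaper cited roughly 36 INT32 instructions per 100 FP32 instructions in game shaders, and on Ampere the second datapath runs either FP32 or INT32, so integer work displaces FP32 issue slots. This back-of-envelope model is my own sketch under those assumptions, not anything from the post:

```python
# Toy model (assumption: ~36 INT ops per 100 FP ops, the Turing whitepaper
# figure). INT32 work shares issue slots with FP32 on Ampere, so only
# 100/136 of peak slots go to FP32. A rough sketch, not a simulator.
def usable_fp32(peak_tflops, int_per_100_fp=36):
    fp_share = 100 / (100 + int_per_100_fp)
    return peak_tflops * fp_share

print(round(usable_fp32(3.1), 2))   # a "3.1 TF" Ampere GPU -> ~2.28 TF
print(round(usable_fp32(30.0), 2))  # a "30 TF" card -> ~22.06 TF
```

That works out to a ~26% haircut, close to the low end of the 30%~40% range above; real workloads vary, which is why the range is so wide.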
 
just going to copy this from another forum.
What's clear from watching the video is that the graph is not in sync with the video. I noticed that right away. However, just by watching the video alone, you can clearly see the drops. Almost every single Cyberpunk video released so far has drops. It's so easy to see on OLED it's not even funny.
 
3.1 is the real TF number. the issue is that it's hard to compare that to GCN or RDNA2 TF numbers.

neither RDNA2 nor GCN use dual issue FP32, Ampere does.
and there's a reason Sony didn't even try saying the PS5 Pro GPU is 32 TFLOPS, as they knew it would set the wrong expectations.
they could have btw., and it wouldn't have been a lie... the PS5 Pro is a 32 TFLOPS machine... but it's a 32 TFLOPS RDNA3 machine, and RDNA3 uses dual issue FP32, and so this 32 TFLOPS RDNA3 GPU is only 45% faster than the 10.28 TFLOPS RDNA2 GPU of the base PS5.

you can't even directly compare a 3.1 TFLOPS Nvidia GPU to a 3.1 TFLOP RDNA3 GPU, even tho they both use dual issue FP32. so comparing an Nvidia dual issue FP32 GPU to an AMD GPU that doesn't use it is not gonna give you good estimates of how performance compares.




no, but TF numbers have become very murky in the last couple of years.
the ROG Ally is an 8 TF handheld for example... and that's not a fake number either, but if you just look at that number and expect it to run games better than a Series S, and almost as well as a PS5, you will be disappointed

Also copied from another forum, as I don't have enough experience with this tech. It's not that simple, from what I understand. It also seems that DF is just wrong about a lot:
Because what I hear in that word salad from Alex is a description of the Ampere architecture as "dual-issue" (it isn't), followed by a commonly repeated misconception about the architectural evolution between Pascal, Turing, and Ampere, followed by an explanation of how GPUs don't reach their theoretical numbers (no shit), conflating that with "flopflation", and then a "well, I guess flops mean nothing and we shouldn't talk about them anymore".

This topic has been hashed over multiple times in this thread. Do we really want to reheat it yet again?

TLDR: according to the logic Alex used to justify that Switch 2 is really only 1.4tf/2.6tf, I could also argue that the PS4 is only 1.2tf. So what?

Regarding the architectural confusion, check this out, maybe it will help:

GTX 1080: 20 SMs, 2560 cores, 8.873 TFLOPs
RTX 2080: 46 SMs, 2944 cores, 10.07 TFLOPs
RTX 3080: 68 SMs, 8704 cores, 29.77 TFLOPs

If you don't understand what happened, you probably look at these numbers and think that Turing was a disappointing side-grade that mostly just added RT and DLSS support, and that Ampere was a pretty good leap forward from Turing, but since the benchmarks don't bear out a 3x gain over Turing, the whole split INT/FP thing must mean that Ampere never uses half its cores, that Nvidia counts them anyway, and that really Ampere is only 4352 cores and 14.88 TFLOPs. But that is NOT what happened.

What actually happened is that Turing was a massive leap forward in density, but Nvidia chose to spend it trying to gain FP efficiency while creating a card that was better at other tasks like ML, by splitting the cores into an INT stack and an FP stack. The INT stack doesn't do FP and therefore by definition does not count towards FLOPs. Games are mostly FP-heavy, but INT still happens as some part of executing code, so you have an imperfect mix, but since the INTs are separate you get fantastic efficiency out of your FP unit. But then half your silicon is barely working: what a waste! So Ampere fixed it: the INT stack can also do FP, so now all cores count towards FLOPs again, and if there is no INT work you get your full FLOPs (or close) for FP work. This sacrifices some of Turing's fantastic FP efficiency, but gives you way more total FP capacity.

If Turing had stayed like Pascal, it would have counted as 5888 cores and 20.14 TFLOPs, and Ampere's performance would have benchmarked right in line with expectations. Instead, Turing didn't claim half its performance win, so when Ampere came along it claimed both Turing's and Ampere's performance wins and looked like way more of a leap than it actually was, causing people not to believe it and to assume Nvidia had pulled some kind of "flopflation" fast one. They didn't. The statement of "1.4 tf not 1.7 tf" is purely a statement of Ampere's efficiency vs. theoretical performance, which is not flopflation; it's just... how GPUs work, and a statement you can apply to ANY GPU, including GCN, RDNA 2, etc. Actual flopflation relates to dual issue, which is an RDNA 3 thing.
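The core counts and TFLOPs quoted above are internally consistent with the standard formula, TFLOPs = cores × 2 FLOPs per clock (one fused multiply-add) × boost clock:

```python
# Cores and reference boost clocks (GHz) for the cards listed above.
cards = {
    "GTX 1080": (2560, 1.733),
    "RTX 2080": (2944, 1.710),
    "RTX 3080": (8704, 1.710),
}

for name, (cores, boost_ghz) in cards.items():
    tflops = cores * 2 * boost_ghz / 1000  # 2 FLOPs per core per clock (FMA)
    print(f"{name}: {tflops:.2f} TFLOPs")
# GTX 1080: 8.87, RTX 2080: 10.07, RTX 3080: 29.77
```

Which reproduces the 8.87/10.07/29.77 figures exactly; the controversy is entirely about which units get counted as "cores", not about the arithmetic.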
 
This is a huge leap from Switch 1. The PS5 performs a lot better than the PS4, but Switch 1 couldn't achieve this look at all, let alone with playable performance.
 
Also copied from another forum, as I don't have enough experience with this tech. It's not that simple, from what I understand. It also seems that DF is just wrong about a lot.
No, Alex is right. Ampere allowed for dual-issue FP32 vs. the FP32/INT32 of Turing. Blackwell allows for INT32/INT32 now, but that's rather useless for games. When he says 1.4 TF, it's in reference to Turing/RDNA, just to set the line of expectation for the performance most people would expect. He's not literally claiming the GPU is 1.4 TF. Also, Ampere/RDNA+ are both way more efficient than the GCN in the PS4, so it's not as simple as 1.4 vs. 1.84 there either.
 
No, I recall they progressively allowed higher clocks for more demanding titles; the first time I saw this was for Mortal Kombat 11.
They eventually unlocked a higher portable GPU clock speed and a higher CPU clock speed for loading (to assist with decompression and the like). CPU speeds during gameplay were never boosted, and docked GPU clocks never got a boost.
 
The reason is that Nvidia implemented it differently.
AMD simply added the possibility of issuing 2 instructions per shader, which is mostly useless in games, as it only applies to cases where you have to do the same instruction twice.

Nvidia actually doubled the number of CUDA cores, but if I understand it correctly, only 2/3 of them can do floating-point calculations while the rest can only do integer calculations (it gets a bit complicated, as not all of the floating-point shaders can be utilised while the integer ones are being used... and all in all I don't fully understand the pros and cons of this approach, tbh).

So it's extremely hard to compare Ampere to RDNA3, and especially to RDNA2.

Generally speaking, Ampere TFLOPS are not quite as "inflated" as RDNA3 TFLOPS. You can almost perfectly split RDNA3 TFLOPS in half to get an estimate of an equivalent RDNA2 GPU's performance, aka real-world gaming performance.
On Ampere it's more like deducting 30%~40% to be roughly comparable to RDNA2. But that's still very murky...

So basically, they are very hard to compare 1 to 1, as they are very different architectures.
It's not that only 2/3 of the CUDA cores can do FP32 calculations; it's that one of the two datapaths per SM partition can execute either 16 FP32 operations or 16 INT32 operations per clock. So you can achieve double the FP32 performance, but only in non-integer workloads.

"In the Turing generation, each of the four SM processing blocks (also called partitions) had two primary datapaths, but only one of the two could process FP32 operations. The other datapath was limited to integer operations. GA10x includes FP32 processing on both datapaths, doubling the peak processing rate for FP32 operations. As a result, GeForce RTX 3090 delivers over 35 FP32 TFLOPS, an improvement of over 2x compared to Turing GPUs"



And the other thing is that Ampere cards see fairly consistent performance relative to past generations. The 3080 and 3070 have the same number of SMs as the 2080 Ti and 2080 respectively, and very similar clock speeds. The doubling of FP32 pushes them into 6900 XT and 2080 Ti class performance, but we don't see "regressions" where they really act like a 2080 Ti or 2080. So I don't think we should treat the benefit of the extra FP32 as a "best case", like with RDNA 3, but as a standard part of the card's performance profile.
 

This guy is intentionally missing the point. Comparing Switch 2 docked to the Steam Deck is all well and good, but the whole point of both devices is portability.

These devices were built for handheld usage, so naturally the comparison people want to see is both in handheld mode.

Nobody has had the opportunity to do that properly yet, for obvious reasons, but this nob is carrying on as if it's a ludicrous proposition.
 
We've finally got early fps analysis of the released docked footage for cyberpunk.



I hate to say it but it's not looking good. It can drop as low as 17fps as seen at 11:56. I think that the insane claims that it was anywhere near a series s can end. With internal resolutions of sub 720p in performance modes and it can't even hold 30 fps? The more I see of cyberpunk, the more I want to cancel my preorder for the game. It's pretty much $100 cad for a near 5 year old game that drops to 17 fps on new hardware.


Whaaat?? Same vid as in the OP, and people say it looks good: 30 fps in PERFORMANCE MODE 90 percent of the time, destroys Steam Deck.




FsioefR.jpeg


cCa3tk4.jpeg

I don't know who either of these are, but the second one doesn't have an fps reading, and the first one is using trdrop (it's old and doesn't work too well).
Don't trust any rando's readings/videos.
Edit: Here's trdrop in case anyone wants to try it for themselves, though I don't think it will be all that accurate.
 
This guy is intentionally missing the point. Comparing Switch 2 docked to the Steam Deck is all well and good, but the whole point of both devices is portability.

These devices were built for handheld usage, so naturally the comparison people want to see is both in handheld mode.

Nobody has had the opportunity to do that properly yet, for obvious reasons, but this nob is carrying on as if it's a ludicrous proposition.
Switch 2 clearly outshining the Steam Deck in docked mode is not an unfair comparison. Its hybrid nature is one of the selling points of the system. It's not Switch 2's fault the Steam Deck can't dock and upscale graphics accordingly.
 
Switch 2 clearly outshining the Steam Deck in docked mode is not an unfair comparison. Its hybrid nature is one of the selling points of the system. It's not Switch 2's fault the Steam Deck can't dock and upscale graphics accordingly.

The Steam Deck can dock and upscale... FYI
 
Yeah, I heard the Steam Deck docked upscaling experience is kinda janky and doesn't really work for the higher-end AAA games.

You can do it, but YMMV.

It is doable, but I wouldn't recommend it for more demanding games. The upscaler ain't DLSS by any stretch.
 
Steam Deck can dock and upscale....FYI
Earlier in the thread I was called out for stating it did; people were like "dO yOuR reSEaRch" and told me the Steam Deck doesn't run games better when it's docked. So which is it? Where's the goalpost at? Cement it in place, please.
 
Earlier in the thread I was called out for stating it did; people were like "dO yOuR reSEaRch" and told me the Steam Deck doesn't run games better when it's docked. So which is it? Where's the goalpost at? Cement it in place, please.

My understanding is the Steam Deck doesn't allow for a higher TDP when docked, so that is correct. I think I misunderstood your point when you said "Steam Deck can't dock".
 
That was a different CPU profile only for loading screens, IIRC, but I'm open to being surprised. Not that it will need it, though; what it would actually need is a TV-only version with the full GPU power unlocked and everything set to max theoretical frequencies that don't potentially affect game logic (CPU, memory bandwidth, etc.). Make me happy, Nintendo.
I wish that were the case, but it'll never happen. The only reason the Switch took off was that it brought over the 100+ million 3DS users, who had nowhere else to go. The 10 million Wii U users came along for the ride as well, but Nintendo has always viewed the Switch as a handheld first and a home console second.
 
We've finally got early fps analysis of the released docked footage for cyberpunk.



I hate to say it but it's not looking good. It can drop as low as 17fps as seen at 11:56. I think that the insane claims that it was anywhere near a series s can end. With internal resolutions of sub 720p in performance modes and it can't even hold 30 fps? The more I see of cyberpunk, the more I want to cancel my preorder for the game. It's pretty much $100 cad for a near 5 year old game that drops to 17 fps on new hardware.


A performance video by Woke Vibes Gaming? Come on.
 
I didn't say AI, buuut you think a more modern architecture, DLSS, G-Sync (VRR), and a more capable CPU make no difference? oohoohoohoohoo

I didn't mention AI, so if that's being brought in as part of the performance conversation, it's worth clarifying that AI can assist in certain scenarios (like upscaling or asset generation), but it doesn't fundamentally overcome hardware limitations.

It's not at all a substitute for raw power.

And, yes, architectural improvements, DLSS, VRR, and a stronger CPU/GPU absolutely make a difference, but they operate within the boundaries of the hardware. The core specs still define the performance ceiling.

I mean, we have modern 3-cylinder engines outperforming older, larger engines thanks to efficiency gains and better design... but you probably don't want one, and those engines still carry the limitations of their class; they're just better optimized. Same applies here.

The Switch 2 can be impressive for what it is, but it still is what it is.
 