> It's going to be interesting how these third-party games will perform on an ecosystem that has become irrelevant in that sphere.
Having a bigger user base to sell to is now a bad thing?

> Having a bigger user base to sell to is now a bad thing?
How can you explain flops that release everywhere, then?
> How can you explain flops that release everywhere, then?
That's the game's fault… Still, bigger user base = more sales, by simple logic.
What I am saying is that the SW2 will be a new console; I wonder whether third-party games will see success when Nintendo's consoles have lost that aspect of their identity.
> For the thousands of Switch owners that never bought/played it on Sony or MS' platforms? Not every gamer owns every system or has played every game available on every system. Also for those who want a superior portable version of Cyberpunk and don't want to pay $500+ for a system that they then have to tinker with settings on to get a halfway decent version of the game.
Thousands? Based on console sales, there are easily at least ten million Switch owners who don't own a PS/MS console.
What do you think the real TF number on Switch 2 is? Just out of curiosity!
So the TF number is fake?
> That's the game's fault… Still, bigger user base = more sales, by simple logic.
*potential sales
> Thousands? Based on console sales, there are easily at least ten million Switch owners who don't own a PS/MS console.
Oh, I know. I was just low-balling the number to see how he'd react.
> According to Richard, the Switch 1 received an increase in the GPU clock speed at some point. So it's totally possible that could happen again. If the Max Speed information is there, it's for a reason.
That was a different CPU profile, used only for loading screens IIRC, but I'm open to being surprised. Not that it will need it, though; what it would actually need is a TV-only version with the full GPU power unlocked and everything set to max theoretical frequencies that don't potentially affect game logic, like CPU and memory bandwidth, etc... Make me happy, Nintendo.
> 3.1 is the real TF number. The issue is that it's hard to compare that to GCN or RDNA2 TF numbers.
> Neither RDNA2 nor GCN uses dual-issue FP32; Ampere does.
> And there's a reason Sony didn't even try saying the PS5 Pro GPU is 32 TFLOPS: they knew it would set the wrong expectations.
> They could have, by the way, and it wouldn't have been a lie... the PS5 Pro is a 32 TFLOPS machine... but it's a 32 TFLOPS RDNA3 machine, and RDNA3 uses dual-issue FP32, so this 32 TFLOPS RDNA3 GPU is only 45% faster than the 10.28 TFLOPS RDNA2 GPU of the base PS5.
> You can't even directly compare a 3.1 TFLOPS Nvidia GPU to a 3.1 TFLOPS RDNA3 GPU, even though they both use dual-issue FP32. So comparing an Nvidia dual-issue FP32 GPU to an AMD GPU that doesn't use it is not going to give you good estimates of how performance compares.
> No, but TF numbers have become very murky in the last couple of years.
> The ROG Ally is an 8 TF handheld, for example... and that's not a fake number either, but if you just look at that number and expect it to run games better than a Series S, and almost as well as a PS5, you will be disappointed.

But the TF formula used for this machine is basically the same one used for the AMD RDNA2 GPUs in consoles, IIRC; this is not including the dual issue as far as I can tell, and it is behaving exactly as one would assume compared to the Xbox Series S.
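For anyone who wants to sanity-check the raw numbers being thrown around, they all come from the same peak-FP32 formula. A minimal sketch, assuming the widely reported (but not officially confirmed) T239 figures of 1536 CUDA cores at roughly 561 MHz handheld and 1007 MHz docked:

```python
# Peak FP32 throughput: shader cores x 2 ops per clock (an FMA counts as a
# multiply plus an add) x clock in GHz, then /1000 to go from GFLOPS to TFLOPS.
def tflops(cores: int, clock_ghz: float) -> float:
    return cores * 2 * clock_ghz / 1000

print(tflops(1536, 0.561))  # Switch 2 handheld (reported figures): ~1.72 TF
print(tflops(1536, 1.007))  # Switch 2 docked (reported figures):   ~3.09 TF
print(tflops(1280, 1.565))  # Xbox Series S, 20 RDNA2 CUs x 64:     ~4.01 TF
```

The same formula is also why the PS5 Pro example above works out: 10.28 TF x 1.45 is roughly 14.9 TF of RDNA2-style throughput hiding inside a nominal 32 TF RDNA3 figure.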
We've finally got early fps analysis of the released docked footage for Cyberpunk.
I hate to say it, but it's not looking good. It can drop as low as 17fps, as seen at 11:56. I think the insane claims that it was anywhere near a Series S can end. Internal resolutions of sub-720p in performance mode, and it can't even hold 30fps? The more I see of Cyberpunk, the more I want to cancel my preorder for the game. It's pretty much $100 CAD for a nearly 5-year-old game that drops to 17fps on new hardware.
Feels bad.gif
> What?? The game runs at 30fps like above 95% of the time, especially in some heavy fighting scenes, and it's not even out yet. There's room for improvement.
It's all true. I played old builds at an experience event, and it's waaaaaaaaaaaaay better now. CDPR will patch it for years, like The Witcher. Bookmarked.
> 3.1 is the real TF number. The issue is that it's hard to compare that to GCN or RDNA2 TF numbers. [...]
So handheld is 1.7 and docked is 3.1 TF. Got it.
> Nobody is disputing that.
That's factually inaccurate. The Wii U numbers as a comparison have been brought up on these very boards; it's just that the goalposts have been moved about 26.2 miles away from the original stadium.
> What?? The game runs at 30fps like above 95% of the time, especially in some heavy fighting scenes, and it's not even out yet. There's room for improvement.
It doesn't at all. It frequently drops from 30, but to like 28. Occasionally it drops to like 24-25. Then you have the rare drops, which I highlighted.
> We've finally got early fps analysis of the released docked footage for Cyberpunk. [...]
If you frame-by-frame the video during a few of the spots where it's supposed to have dropped to 17fps, you get a complete, full frame every time you press frame advance during those drops, which is what you would expect if it's running at 30fps in a 30fps YouTube video, not if it's running at 17-20. Take the one at this timestamp: it doesn't look like it suddenly drops to half its intended framerate either. Are we sure this is even working correctly? It's also weird that the overlay is not on screen for some of the clips, or for the entire back half of the footage. I don't get it.
EDIT: Yeah, watching a DF framerate analysis, any time there's a dropped frame you have to frame advance twice to move past it and get another frame. That isn't happening in this footage AT ALL. Example here: frame advance through one of those framerate drops and you'll see it takes more than one press per frame any time it isn't running at 60.
It looks like his software is interpreting a large spike as a drop in framerate lasting multiple, even dozens of, frames, when it's only a single spike in the graph, not the repeated spikes you'd see if frames were actually being dropped. So this framerate counter is not accurate at all, I think? As an example, there's a single large spike on the frametime graph, and the framerate counter interprets that one, single spike as a ton of dropped frames in a row, when it can't be, evidenced by the fact that you still get a full frame every time you advance it. IDK, maybe I'm wrong, but it seems odd relative to framerate graph behavior from DF and other places.
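For anyone who wants to reproduce the frame-advance test without mashing the period key, here's a rough sketch of the same idea in code. The file name is a placeholder and the threshold is a guess; the logic is simply that if the game really drops below the video's 30fps, the 30fps encode has to repeat frames, so real drops show up as runs of near-identical consecutive frames, while a lone frametime spike with no repeated frames around it points to the overlay misreading:

```python
# Count duplicated (near-identical) consecutive frames in a capture.
# Requires opencv-python and numpy; "capture.mp4" is a hypothetical file.
import cv2
import numpy as np

cap = cv2.VideoCapture("capture.mp4")
prev = None
repeats = 0
total = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is not None and np.mean(cv2.absdiff(gray, prev)) < 0.5:
        repeats += 1  # mean difference near zero => the frame was repeated
    prev = gray
    total += 1
cap.release()
print(f"{repeats} duplicated frames out of {total}")
```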
> just going to copy this from another forum.
Genius.
> just going to copy this from another forum.
What's clear from watching the video is that the graph is not in sync with the video. I noticed that right away. However, just by watching the video alone, you can clearly see the drops. Almost every single Cyberpunk video released so far has drops. It's so easy to see on OLED it's not even funny.
> genius
It's my first time having an online stalker, kind of nice lol. I do enjoy that I live rent-free in your head.
I knew it, I never saw this frame drop.
> 3.1 is the real TF number. The issue is that it's hard to compare that to GCN or RDNA2 TF numbers. [...]
Because what I hear in that word salad from Alex is a description of the Ampere architecture as "dual-issue" (it isn't), followed by a commonly repeated misconception about the architectural evolution from Pascal to Turing to Ampere, followed by an explanation of how GPUs don't reach their theoretical numbers (no shit), conflating that with "flopflation", and then a "well, I guess flops mean nothing and we shouldn't talk about them anymore".
This topic has been hashed over multiple times in this thread. Do we really want to reheat it yet again?
TLDR - according to the logic Alex used to justify that Switch 2 is really only 1.4tf/2.6tf, I could also argue that PS4 is only 1.2tf. So what?
Regarding the architectural confusion, check this out, maybe it will help:
GTX 1080: 20 SMs, 2560 cores, 8.873 TFLOPs
RTX 2080: 46 SMs, 2944 cores, 10.07 TFLOPs
RTX 3080: 68 SMs, 8704 cores, 29.77 TFLOPs
If you don't understand what happened, you probably look at these numbers and think that Turing was a disappointing side-grade that mostly just added RT and DLSS support, and that Ampere was a pretty good leap forward from Turing, but that it's definitely not 3x over Turing because the benchmarks don't bear that out, so this whole split INT/FP thing must mean that Ampere never uses half its cores, Nvidia counts them anyway, and really, Ampere is only 4352 cores and 14.88 TFLOPs. But that is NOT what happened.
What actually happened is that Turing was a massive leap forward in density, but Nvidia chose to spend it trying to gain FP efficiency while creating a card that was better at other tasks like ML, by splitting the cores into an INT stack and an FP stack. The INT stack doesn't do FP and therefore, by definition, does not count towards FLOPs. Games are mostly FP-heavy, but INT still happens as part of executing code, so you have an imperfect mix; since the INTs are separate, though, you get fantastic efficiency out of your FP unit. But half your silicon is barely working; what a waste! So Ampere fixed it: the INT stack can also do FP, so now all cores count towards FLOPs again, and if there is no INT work you get your full FLOPs (or close) for FP work. This sacrifices some of Turing's fantastic FP efficiency, but gives you way more total FP capacity.
If Turing had stayed like Pascal, it would have counted as 5888 cores and 20.14 TFLOPs, and then Ampere's performance would benchmark right in line with expectations. Instead, Turing didn't claim half of its performance win, so when Ampere came along, it claimed both Turing's and Ampere's performance wins, and looked like way more of a leap than it actually was, causing people to not believe it and assume Nvidia had pulled some kind of "flopflation" fast one. They didn't. The statement of "1.4 tf not 1.7 tf" is purely a statement of Ampere's efficiency vs. theoretical performance, which is not flopflation; it's just... how GPUs work, and a statement you can apply to ANY GPU, including GCN, RDNA2, etc. Actual flopflation relates to dual issue, which is an RDNA3 thing.
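To make the core-counting point concrete, here's a quick sketch. The clocks are the public boost clocks for those three cards (my assumption as to which SKUs the list above refers to), and the same formula reproduces all three listed numbers once you know what Nvidia counted as a "core" in each generation:

```python
def tflops(fp32_cores: int, clock_ghz: float) -> float:
    return fp32_cores * 2 * clock_ghz / 1000  # FMA = 2 ops per clock

print(tflops(2560, 1.733))  # GTX 1080 (Pascal): ~8.87
print(tflops(2944, 1.710))  # RTX 2080 (Turing): ~10.07 (the 2944 INT cores don't count)
print(tflops(8704, 1.710))  # RTX 3080 (Ampere): ~29.77 (the INT stack does FP again)

# Had Turing counted its INT cores the way the post describes:
print(tflops(5888, 1.710))  # "RTX 2080": ~20.14, and Ampere's jump looks ordinary
```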
> Also copied from another forum, as I don't have enough experience with this tech. It's not that simple from what I understand. Also, it seems that DF is just wrong about a lot.
No, Alex is right. Ampere allowed dual-issue FP32 vs the FP32/INT32 of Turing. Blackwell allows INT32/INT32 now, but that is rather useless for games. When he says 1.4 TF, it is in reference to Turing/RDNA, just to set a line of expectation against what most people would expect regarding performance. He's not literally claiming the GPU is 1.4 TF. Also, Ampere/RDNA2+ are both way more efficient than the GCN in the PS4, so it's not as simple as 1.4 vs 1.84 there either.
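As a deliberately simplified illustration of the FP32/INT32 datapath difference being argued about here (a toy model I'm assuming for clarity; real warp scheduling is far messier, so treat the numbers as directional only):

```python
# Per SM partition: Turing has a 16-wide FP32 pipe plus a 16-wide INT32 pipe;
# Ampere's second pipe can do FP32 OR INT32. Model effective FP32 lanes per
# clock as a function of the INT32 share of issued instructions (share <= 0.5).
def fp32_lanes_per_clock(arch: str, int_share: float) -> float:
    assert 0.0 <= int_share <= 0.5
    if arch == "turing":
        return 16.0                      # INT runs beside FP; FP rate stays flat
    if arch == "ampere":
        return 32.0 * (1.0 - int_share)  # every INT op steals a flexible slot
    raise ValueError(arch)

for share in (0.0, 0.25, 0.35):
    print(share, fp32_lanes_per_clock("turing", share), fp32_lanes_per_clock("ampere", share))
# With a typical ~25-35% INT mix, Ampere lands at ~21-24 FP32 lanes vs Turing's 16:
# meaningfully faster, but nowhere near the 2x that the doubled FLOPS figure implies.
```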
> That was a different CPU profile, used only for loading screens IIRC [...]
No, I recall they progressively allowed higher clocks for more demanding titles; the first time I saw this was for Mortal Kombat 11.
> No, I recall they progressively allowed higher clocks for more demanding titles; the first time I saw this was for Mortal Kombat 11.
Really didn't know that; gotta do some research then, thanks.
> No, I recall they progressively allowed higher clocks for more demanding titles; the first time I saw this was for Mortal Kombat 11.
They eventually unlocked a higher portable GPU clock speed and a higher CPU clock speed for loading (to assist with decompression and the like). CPU speeds during gameplay were never boosted, and docked GPU clocks never got a boost.
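For reference, the Switch 1 clock profiles being discussed, as commonly reported from homebrew and Digital Foundry documentation over the years (treat these as community-documented figures, not official Nintendo specs):

```python
# Community-documented Switch 1 clock profiles (MHz); figures as reported by
# homebrew/DF sources, not official specs.
SWITCH1_CLOCKS_MHZ = {
    "cpu_gameplay": 1020,               # never raised during gameplay
    "cpu_boost_loading": 1785,          # boost profile for loading/decompression
    "gpu_handheld": (307.2, 384, 460),  # extra portable profiles added over time
    "gpu_docked": 768,                  # never raised
}
```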
> The reason is that Nvidia implemented it differently. AMD simply added the possibility of running two instructions per shader, which is mostly useless in games, as it only applies to instances where you have to do the same instruction twice. Nvidia actually doubled the number of CUDA cores, but if I understand it correctly, only 2/3 of them can do floating-point calculations while the rest can only do integer calculations (it gets a bit complicated, as not all of the floating-point shaders can be utilised while the integer ones are being used... and all in all, I don't fully understand the pros and cons of this approach, tbh). So it's extremely hard to compare Ampere to RDNA3, and especially to RDNA2. Generally speaking, Ampere TFLOPS are not quite as "inflated" as RDNA3 TFLOPS: you can almost perfectly split RDNA3 TFLOPS in half and get an estimate of an RDNA2 GPU's performance, aka real-world gaming performance. On Ampere, it's more like deducting 30%~40% to be roughly comparable to RDNA2, but that's still very murky... So basically, they are very hard to compare 1:1, as they are very different architectures.

It's not that only 2/3 of the CUDA cores can do FP32 calculations. It's that one of the two datapaths per SM can execute either 16 x FP32 operations or 16 x INT32 operations. So you can achieve double the FP32 performance, but only in non-integer workloads.
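Putting the quoted rules of thumb in one place; these are the poster's rough conversion factors for "RDNA2-equivalent" throughput, not measured benchmarks:

```python
def rdna2_equiv_from_rdna3(tf: float) -> float:
    return tf / 2.0                  # dual issue rarely helps real games

def rdna2_equiv_from_ampere(tf: float):
    return (tf * 0.60, tf * 0.70)    # "deduct 30%~40%"

print(rdna2_equiv_from_rdna3(32.0))  # PS5 Pro: ~16 "RDNA2 TF" vs the base PS5's 10.28
print(rdna2_equiv_from_ampere(3.1))  # Switch 2 docked: ~1.9-2.2 "RDNA2 TF"
```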
> DF said that in docked mode = base PS4.
Do you really believe that?…
> We've finally got early fps analysis of the released docked footage for Cyberpunk. [...]
Whaaaattt?? Same vid as in the OP, and people say it looks good: 30fps in PERFORMANCE MODE 90 percent of the time, and it destroys the Steam Deck.
> This guy is intentionally missing the point. Comparing Switch 2 docked to Steam Deck is all well and good, but the whole point of both devices is portability. These devices were built for handheld usage, so naturally the comparison people want to see is both in handheld mode. Nobody has had the opportunity to do that properly yet, for obvious reasons, but this nob is carrying on as if it's a ludicrous proposition.
Switch 2 clearly outshining the Steam Deck in docked mode is not an unfair comparison. Its hybrid nature is one of the selling points of the system. It's not Switch 2's fault the Steam Deck can't dock and upscale graphics accordingly.
> Steam Deck can dock and upscale.
Yeah, I heard the Steam Deck docked upscaling experience is kinda janky and doesn't really work for the higher-end AAA games.
> Yeah, I heard the Steam Deck docked upscaling experience is kinda janky and doesn't really work for the higher-end AAA games.
You can do it, but YMMV.
> Steam Deck can dock and upscale... FYI.
Earlier in the thread I was called out for stating it did; people were like "dO yOuR reSEaRch" and told me the Steam Deck doesn't run games better when it's docked. So which is it? Where's the goalpost at? Cement it in place, please.
> That was a different CPU profile, used only for loading screens IIRC, but I'm open to being surprised. Not that it will need it, though; what it would actually need is a TV-only version with the full GPU power unlocked [...]
I wish that were the case, but it'll never happen. The only reason the Switch took off was that it brought over the 100+ million 3DS users, who had nowhere else to go. The 10 million Wii U users came along for the ride as well, but Nintendo has always viewed the Switch as a handheld first, home console second.
> We've finally got early fps analysis of the released docked footage for Cyberpunk. [...]
I didn't say AI, buuut you think a more modern architecture, DLSS, G-Sync (VRR), and a more capable CPU make no difference? oohoohoohoohoo