Except that's not true unless that game's engine relies very heavily on mesh compute. Otherwise, you're seeing giant TF leaps but modest (at best) increases in culling throughput, rasterization throughput, pixel fillrate, texture/texel fillrate, etc.
You know, the things that matter a bit more for gaming performance, at least until mesh shading sees wider use in commercial AAA games. Even with that in mind, for a long while it's mostly just going to mean higher-resolution textures and maybe a few extra effects. Game budgets will absolutely not scale enough to meaningfully use 75 TF, 92 TF, or whatever of compute in any way other than as resolution and texture boosters.
Those HBM3 specs look kind of low; in fact, they look closer to HBMNext, which IIRC is more or less Micron's version of the HBM2E that SK Hynix has had for a few years now. The HBM3 specs I've seen mentioned are closer to 5 Gbps per pin, and I think one company speculated it could reach 7 Gbps.
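To put those per-pin numbers in perspective, here's a rough sketch of the per-stack bandwidth math. The 1024-bit interface per stack is the standard HBM stack width; the per-pin rates are just the figures mentioned above, not confirmed specs for any product:

```python
def stack_bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int = 1024) -> float:
    """Bandwidth of one HBM stack in GB/s: pins * Gbps-per-pin / 8 bits-per-byte."""
    return bus_width_bits * gbps_per_pin / 8

# HBM2E-class speed vs the rumored HBM3 range
print(stack_bandwidth_gbs(3.6))  # 460.8 GB/s per stack
print(stack_bandwidth_gbs(5.0))  # 640.0 GB/s per stack
print(stack_bandwidth_gbs(7.0))  # 896.0 GB/s per stack
```

So the jump from ~3.6 Gbps to 5-7 Gbps per pin is a big per-stack bandwidth gain, which is why those lower quoted specs read more like HBM2E-class parts.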
Here is some more information on more recent HBM3 developments.
That said, they could always clock the pins below spec if it means hitting a certain power budget. But at that point you have to start weighing whether the power savings are worth the premium HBM3 would likely carry over GDDR6/GDDR6X (maybe GDDR7, but I don't think that's coming anytime soon).
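Even clocked below spec, HBM3's wide bus keeps it ahead on raw bandwidth, which is what makes the cost question interesting. A quick illustrative comparison with made-up but plausible configurations (a two-stack HBM3 setup run under spec vs typical GDDR6/GDDR6X bus widths and speeds, not any specific product):

```python
def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Aggregate memory bandwidth in GB/s for a given bus width and pin speed."""
    return bus_width_bits * gbps_per_pin / 8

# Two HBM3 stacks (2 x 1024-bit) deliberately run below spec at 4.0 Gbps
print(bandwidth_gbs(2048, 4.0))   # 1024.0 GB/s
# A 256-bit GDDR6 setup at 16 Gbps
print(bandwidth_gbs(256, 16.0))   # 512.0 GB/s
# A 384-bit GDDR6X setup at 19 Gbps
print(bandwidth_gbs(384, 19.0))   # 912.0 GB/s
```

The downclocked HBM3 still wins on bandwidth (and almost certainly on power per GB/s), so the whole tradeoff really does come down to how big the HBM3 price premium ends up being.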