...So, because it looks good to you, it's the most significant technical achievement in gaming history?
Technologically, Hellblade 2 is three things: a cumulative use of existing high-end technology, a prolonged investment of time and resources by top craftsmen in digital modeling and photogrammetry, and an intelligent selection of limiting factors that maximizes the impact of its visual techniques.
It's the industry working at the best of its capacities.
It is not the industry changing the paradigm or breaking new ground.
I'm really looking forward to a Boundary Break or something similar going beyond the bounds of Hellblade 2 to show how it works and how it pulls off its tricks, because I don't think people understand what it is actually doing. If we're going to talk about this game beyond "it looks very, very pretty," it would be good to have a better idea of how they got away with this impressive visual onslaught, and how this one game's techniques were built for this one game.
Like, when you get close up on Senua as her mind spins out of control and her world goes dark, that's an intense scene of drama, but it's also a good ol' Loading Elevator: swapping out the level and changing the lighting while the camera and gameplay are stopped. (Same when the camera looks up into the clouds. And when it pans over rocky landscapes, my guess is those are procedural placements of the same few Quixel Megascans, flipped around and dropped in, rather than a stretch of geometry modeled specifically for those scenes.)

The face model is incredible, building on techniques used in the previous Hellblade as well as other games (Death Stranding, RE Village, Crime Boss: Rockay City, the whole of MetaHumans), but it accomplishes a lot of its lifelike human quality by being a recording of a human. Similarly, the action is 100% mo-capped, allowing for incredibly nuanced and visceral combat, with the limitation that it has to play back, or blend in and out of, what was physically recorded.

And the use here of Nanite and photogrammetry is at the extraordinary, scrutiny-defying level of quality we've been waiting for since the debut presentation of UE5 in 2020; but homebrewers have been doing "photorealistic" work for three years already, and it's just taken the industry time to ship the games it started building in UE5 back then. (Plus, again, the slow pace and the mostly-nighttime or sunrise/sunset setting allow a lot of repetition and delayed asset loading, so the game can prioritize higher-resolution objects than faster-paced games can manage.)
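To be clear about what I mean by "procedural placement of the same few megascans": this is purely my guess at the technique, but the general idea is easy to sketch. You take a tiny pool of scanned rock assets and scatter them with randomized position, yaw, and mirroring, so repetition reads as natural variation. Everything here (the asset names, the parameters) is hypothetical illustration, not anything from Ninja Theory's actual pipeline:

```python
import random

def scatter_rocks(scan_ids, count, area, seed=0):
    """Toy scatter sketch: reuse a small pool of scanned rock assets
    by dropping them with random position, yaw, and mirror flips,
    instead of modeling a unique stretch of terrain."""
    rng = random.Random(seed)
    placements = []
    for _ in range(count):
        placements.append({
            "scan": rng.choice(scan_ids),    # reuse one of a few scans
            "x": rng.uniform(0.0, area[0]),
            "y": rng.uniform(0.0, area[1]),
            "yaw_deg": rng.uniform(0.0, 360.0),  # spin it around
            "mirror": rng.random() < 0.5,        # flip to hide repetition
        })
    return placements

# Three scans can dress a whole panning shot's worth of landscape.
field = scatter_rocks(["rock_a", "rock_b", "rock_c"], count=200, area=(100.0, 100.0))
```

Under a slow cinematic pan at dusk, you'd never clock that it's the same three rocks over and over.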
There aren't a lot of glitch reports or out-of-bounds cases to look at yet, but this fire glitch in photo mode shows one example. Snap out of the game's limited viewpoint in the scene where Senua is far away from the burning giant, and you can see that the fire is actually a 2D element; not even a 3D VDB, which would be a technical achievement (well, kind of... it's still a playback rather than a live fire sim, and I'm not sure experienced UE technicians would be all that impressed with laying in a sparse volume texture, even if that's only now coming into realtime use). It's a great fire effect for what this game needs; it would perhaps not be a great fire effect for a game where players have full control over the playfield. They used the right trick for the right effect, and the limitations of the game design kept them from having to throw out the book in hopes of finding new ways to advance the technology.
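For anyone unfamiliar with why a flat fire card holds up from the scripted viewpoint but falls apart in photo mode: a 2D billboard just rotates to face the camera every frame. Here's a minimal sketch of the classic yaw-only billboard math (my illustration, not anything pulled from this game's actual shaders):

```python
import math

def billboard_yaw(sprite_pos, camera_pos):
    """Yaw (radians) that rotates a flat sprite about the vertical axis
    so its face points at the camera -- the classic billboard trick
    behind a lot of 'volumetric-looking' fire effects."""
    dx = camera_pos[0] - sprite_pos[0]
    dz = camera_pos[2] - sprite_pos[2]
    return math.atan2(dx, dz)

# From the one scripted viewpoint the flat card reads as a fireball;
# a free photo-mode camera orbiting the sprite exposes the trick,
# because the card visibly snaps to face every new camera position.
yaw = billboard_yaw((0.0, 0.0, 0.0), (3.0, 1.0, 3.0))
```

That per-frame re-aiming is exactly what a free camera exposes: orbit the giant and the "fire" pivots with you instead of having sides.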