DeepEnigma
Gold Member
Looks like Skull & Bones isn't releasing before May 2021.
Title looked pretty ambitious. Rebooted into a next-gen-only game maybe?
At least it should get a better next-gen treatment then.
I'm wondering if a next-gen Spider-Man patch could potentially decrease the install size of the game?
Cerny has said they duplicate assets 400 times to optimize HDD seek patterns (a technique used since the PS1/CD era).
SSD space will be valuable, so better not squander it.
If that's the reason game updates are so bloated this gen, then wow... although I suspect 1080p (and soon 4K) assets don't help either.
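To put rough numbers on that point, here's a minimal back-of-the-envelope sketch. Every figure (unique asset size, duplicated share, copies per asset) is invented for illustration and not taken from any real game; it only shows how seek-optimizing duplication can inflate an HDD install versus a deduplicated SSD install.

```python
# Rough, illustrative estimate of how asset duplication for HDD seek
# optimization inflates install size. All numbers are made up for the
# sake of the example, not taken from any real game.

unique_assets_gb = 30.0      # hypothetical size of the unique asset set
duplicated_share = 0.4       # fraction of those assets that get duplicated
copies_per_duplicated = 3    # average extra copies laid out near each level

duplicated_gb = unique_assets_gb * duplicated_share * copies_per_duplicated
hdd_install_gb = unique_assets_gb + duplicated_gb
ssd_install_gb = unique_assets_gb  # fast SSD random reads make duplication unnecessary

print(f"HDD-style install: {hdd_install_gb:.0f} GB")
print(f"SSD-style install: {ssd_install_gb:.0f} GB "
      f"({hdd_install_gb - ssd_install_gb:.0f} GB saved)")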
Great minds think alike.
I was actually thinking the very same thing after the article came out as well.
This business with "Arcturus" needs to stop. Arcturus is not an architecture. It's not a successor to Vega, it is a derivative of Vega. It is a singular chip with no graphics functionality whatsoever. It has no geometry, no ROPs, no TMUs, no display output. It is just for compute. It has no other function.
The 128 CU part, yes, but in this case we're talking about a semi-custom chip with fewer (80) CUs, so they could have added ROPs/TMUs and other GPU-related circuitry.
Either way, we know Arcturus is a Vega successor and next-gen consoles are getting Navi/RDNA.
Sony might have also experimented with Vega-based devkits.
I still don't get why people went crazy over Sony's E3 that year.
You're tilting at windmills here. Nobody said that.
It would be absolutely pointless having a 128CU Vega graphics processor in a console.
It was filled with megatons, from the Big Three (TLG, FF7R and Shenmue) to COD switching sides, or from Guerrilla doing something new to the jaw-dropping Uncharted 4 demo. Also, the conference had a really good pace that maintained the hype of previous reveals throughout the whole show.
I still don't get why people went crazy over Sony's E3 that year.
Hopefully new IPs, considering their developer lineup.
So besides Halo and Forza, what could Xbox Game Studios realistically have for the Scarlett launch window?
14TF Navi should be as fast in games as 18TF Polaris, so do you really think power like that would not be enough to run native 4K 60fps in the majority of games? Of course a 12-core CPU and 14TF GPU leak sounds too good to be true, especially at a $500-599 price point, but like I have said, I would expect native 4K from next-gen consoles, especially when the Xbox One X's 6TF GPU can already run many games at 4K.
You are not getting native 4K and you are not getting 60fps.
BOTH things require 2x the GPU resources: going from 1440p to native 4K, and going from 30fps to 60fps. You are essentially taking a 10 tflops GPU and turning it into a 2.5 tflops GPU. Devs will never waste precious GPU cycles on rendering more pixels when they can use them to add detail to those pixels. They will never waste half of the GPU resources on 60fps unless, of course, they are competing with CoD and need their multiplayer shooters to be 60fps.
I do agree that if we are only getting 5700 performance, we can forget about 4K. But even at 12-14 tflops, you won't see devs target native 4K, unless of course one console is 8 tflops and the other is 14 tflops, in which case devs will target the lowest common denominator and simply use the remaining 6 tflops on pushing native resolution like they do with mid-gen refreshes.
And yeah, my 2080 struggles to run games at native 4K 60fps with ray tracing turned on. Gears of War runs at 45fps at native 4K. I really don't see how next-gen GPUs will run anything at native 4K 60fps unless they are indie games not worried about pushing graphics effects like destruction, NPCs, ray tracing and other kinds of simulations devs previously couldn't do. I expect to see 100% of open world games at 4K checkerboard and 30fps. It will be like Uncharted 4: campaign 30fps, multiplayer 60fps.
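To make the "2x and 2x" arithmetic in the post above concrete, here's a minimal sketch of the effective-budget argument: if GPU cost scales roughly linearly with pixel count and frame rate, going from 1440p30 to native 4K60 multiplies the work by about 4.5x. The 10 TF figure is the one used in the post; the linear-scaling assumption is a simplification.

```python
# Sketch of the "effective GPU budget" argument: cost scales (roughly)
# linearly with pixel count and frame rate. Linear scaling is an assumption.

def effective_tflops(tflops, base_pixels, base_fps, target_pixels, target_fps):
    """TFLOPS left over per (base_pixels @ base_fps)-equivalent workload."""
    scale = (target_pixels / base_pixels) * (target_fps / base_fps)
    return tflops / scale

gpu_tflops = 10.0
p1440 = 2560 * 1440
p2160 = 3840 * 2160

# 1440p30 -> native 4K60: 2.25x the pixels and 2x the frame rate,
# i.e. ~4.5x the work, leaving ~2.2 TF of "1440p30-equivalent" budget.
print(effective_tflops(gpu_tflops, p1440, 30, p2160, 60))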
That's a lot of important people from some pretty big studios. It makes you wonder about the scope and scale of what they might be working on.
This seems promising:
New The Initiative Xbox Exclusive in Early Stages of Development
It was only last year that The Initiative joined the roster of Xbox studios, and it looks like they're already making their own game. According to thesegmentnext.com
Lots of big names in that studio.
I found it to be the opposite for me.
Based on the leaked concept art, plus it being the same company that made Forza Horizon, I still hope for the best for the franchise.
Fable 1 was the best of all, while 2 and 3 were mediocre.
I can't accept Fable 2 and Fable 3 with that combat gameplay.
I found it to be the opposite for me.
This is hilarious.
Because
Oh. I actually enjoyed 2's combat.
I can't accept Fable 2 and Fable 3 with that combat gameplay.
This is hilarious.
Ah, fanboy wet dreams... when you want to be part of a crazy religious cult of console warring...
You are not getting it. It's not about whether or not the GPU is capable enough to do native 4K 60fps; it's whether devs will use the GPU resources to make better-looking games vs making current-gen-looking games at a higher resolution.
14TF Navi should be as fast in games as 18TF Polaris, so do you really think power like that would not be enough to run native 4K 60fps in the majority of games? Of course a 12-core CPU and 14TF GPU leak sounds too good to be true, especially at a $500-599 price point, but like I have said, I would expect native 4K from next-gen consoles, especially when the Xbox One X's 6TF GPU can already run many games at 4K.
Obviously (as you're only pretending not to know this) the videos being referenced by the person I quoted...
Well, you quoted me, not him. The gifs are quite funny regardless of which side you're on.
Obviously (as you're only pretending not to know this) the videos being referenced by the person I quoted...
A gif of a religious cult leader throwing out Half-Life 3?!?
Literal gifs of war depicting the "console wars".
You are not getting it. It's not about whether or not the GPU is capable enough to do native 4K 60fps; it's whether devs will use the GPU resources to make better-looking games vs making current-gen-looking games at a higher resolution.
RDR2 uses 4.2 tflops just to render native 4K. The PS4 runs that game at 1080p using its 1.84 tflops. It will become much harder to render games at native 4K when devs start to pack more detail into each pixel. Forget about 60fps in open world games.
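Reading the numbers in the post above as "Xbox One X's 6 TF minus the PS4's 1.84 TF ≈ 4.2 TF spent on the resolution bump", here's a small sketch that also shows what a naive linear-in-pixels estimate would predict. The linear-scaling comparison is an assumption added for illustration, not something the post claims.

```python
# Reading of the numbers above: Xbox One X (6 TF) renders RDR2 at native 4K,
# the PS4 (1.84 TF) at 1080p, so ~4.2 TF is attributed to the extra pixels.
# A naive "cost scales with pixel count" estimate is shown for comparison;
# real scaling is rarely this clean.

ps4_tflops = 1.84
x1x_tflops = 6.0
pixels_1080p = 1920 * 1080
pixels_4k = 3840 * 2160

attributed_to_resolution = x1x_tflops - ps4_tflops               # ~4.2 TF
naive_linear_estimate = ps4_tflops * (pixels_4k / pixels_1080p)  # ~7.4 TF

print(f"TF attributed to the 4K bump: {attributed_to_resolution:.1f}")
print(f"Naive linear-in-pixels estimate: {naive_linear_estimate:.1f}")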
LoL, what a lie!
You are not getting it. It's not about whether or not the GPU is capable enough to do native 4K 60fps; it's whether devs will use the GPU resources to make better-looking games vs making current-gen-looking games at a higher resolution.
RDR2 uses 4.2 tflops just to render native 4K. The PS4 runs that game at 1080p using its 1.84 tflops. It will become much harder to render games at native 4K when devs start to pack more detail into each pixel. Forget about 60fps in open world games.
Learn to read. 1.8 tflops + 4.2 tflops = 6 tflops.
LoL, what a lie!
The PS4 Pro renders 1920x2160 = ~4 million pixels; 4K is ~8 million pixels.
The PS4 Pro has slow memory bandwidth.
Cerny doubled the GPU CUs from 18 to 36 but increased the bandwidth only slightly, from 176 GB/s to 217 GB/s.
The PS4 Pro is like a little turd.
The mid-gen refresh cycle was a mistake; this is what you get selling a console for less than $400 at a profit: an underpowered toy.
I hope next-gen will cost no less than $500, with maybe a $600 BOM.
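The bandwidth complaint above, worked out as bandwidth per teraflop. The spec figures are the publicly known PS4/PS4 Pro numbers quoted in the post; the metric itself is just a rough way to show that compute roughly doubled while bandwidth did not.

```python
# Bandwidth per teraflop for PS4 vs PS4 Pro: compute roughly doubled
# (18 -> 36 CUs) but memory bandwidth grew far less.

ps4 = {"tflops": 1.84, "bandwidth_gbs": 176}
ps4_pro = {"tflops": 4.2, "bandwidth_gbs": 217}

for name, spec in (("PS4", ps4), ("PS4 Pro", ps4_pro)):
    ratio = spec["bandwidth_gbs"] / spec["tflops"]
    print(f"{name}: {ratio:.0f} GB/s per TFLOP")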
Skimming the last few pages for TF guesstimates, I see 9TF, 10TF and some posts about 14TF.
Where's all the insiders?
RDR2 is a big open world game with insane graphics, and yet it runs on a 6TF GPU. In order to run the same game at 60fps you will of course need even more GPU resources, but 14TF Navi (an 18-20TF Polaris equivalent) would have enough, because it's like 3x Xbox One X power. Of course some developers would still choose 30fps even on 14TF Navi, but many would go for 60fps or at least offer NATIVE 4K. I would expect native 4K as a standard on PS5 the same way people expected 1080p from the PS4. The thing is, these days more and more people own 4K screens, and a native picture offers unmatched quality. The Xbox One X and PS4 Pro are mid-gen refreshes, so 4K is not a standard just yet, but on PS5 and the next Xbox it should be.
You are not getting it. It's not about whether or not the GPU is capable enough to do native 4K 60fps; it's whether devs will use the GPU resources to make better-looking games vs making current-gen-looking games at a higher resolution.
RDR2 uses 4.2 tflops just to render native 4K. The PS4 runs that game at 1080p using its 1.84 tflops. It will become much harder to render games at native 4K when devs start to pack more detail into each pixel. Forget about 60fps in open world games.
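A quick sanity check of the "14TF Navi ≈ 18-20TF Polaris ≈ 3x Xbox One X" framing in the post above. The 1.3-1.4x per-flop efficiency factor is the poster's own assumption, not a measured benchmark result.

```python
# Sanity check of "14TF Navi ~ 18-20TF GCN ~ 3x Xbox One X".
# The per-flop efficiency factor is an assumption, not a benchmark.

navi_tflops = 14.0
gcn_efficiency_factors = (1.3, 1.4)   # assumed Navi advantage per flop vs Polaris/GCN
x1x_gcn_tflops = 6.0

for factor in gcn_efficiency_factors:
    gcn_equivalent = navi_tflops * factor
    print(f"{navi_tflops} TF Navi ~ {gcn_equivalent:.1f} TF GCN "
          f"(~{gcn_equivalent / x1x_gcn_tflops:.1f}x Xbox One X)")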
Elsewhere. Playing it safe, hiding behind caution mostly. And who could blame them? At any moment something could be revealed that totally blows up their credibility. lol
Machine learning (among other things) will need tons of compute power to deliver a next-gen leap.
If some rumors are true, PS5 and Scarlett will get some kind of machine learning capabilities built in.
10TF is probably where it's going to be.
There was a tweet some time back about how one of the consoles' GPUs was scoring over 20,000 on the 3DMark Fire Strike test*. The nearest graphics card to that 20,000 score is a GTX 1080 Ti. That card itself runs at around 10-11TF.
There are plenty of videos on YouTube showing that card doing 40-50fps in games at 4K, over 60fps at 1440p, and well over 80fps at 1080p.
But since we're on a console and can optimise heavily, it wouldn't be unbelievable for games to hit 60fps at 4K. But, of course, this is entirely up to the devs.
(* - yes, yes, we're all well aware that 3DMark doesn't exist for consoles, but these are dev kits running all sorts of software tools and people find ways of making this stuff happen.)
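Taking the frame rates quoted above at face value, here's the gap that console-level optimisation would have to close to turn 1080 Ti-class 4K performance into a locked 60fps. The percentages are just arithmetic on the post's own rough figures, not measurements of any console.

```python
# How much extra effective throughput would be needed to turn the quoted
# 1080 Ti-class 4K frame rates into a locked 4K60. Frame rates are the
# rough figures from the post above.

target_fps = 60
quoted_4k_fps = (40, 50)   # "40-50fps in games at 4K"

for fps in quoted_4k_fps:
    uplift = target_fps / fps - 1
    print(f"From {fps} fps to {target_fps} fps needs ~{uplift:.0%} more effective throughput")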
Machine learning (among other things) will need tons of compute power to deliver a next-gen leap.
What most people don't understand is that while 3D graphics are scalable (i.e. 14TF/4K -> 1440p/8-9TF downgrade), compute algorithms (such as neural networks, AI pathfinding, physics etc.) are NOT scalable by nature. We're talking about gameplay-enhancing algorithms and it's not acceptable to downgrade gameplay (unlike resolution).
So, if some game devs are really hellbent on delivering a next-gen leap via GPGPU algos, then you will also need raw compute power, aka high/double-digit TF.
I feel like some people are too fixated on the CPU, but it's not the CPU that is going to run all this crazy stuff. GPGPU isn't a forced "gimmick" because of Jaguar shenanigans. It's here to stay. FOREVER!
PCs are different, because there's PCIe latency between the discrete CPU and the GPU, so GPGPU isn't always beneficial. In PCs you need the CPU's FPU/vector unit (which is on the same die, so zero latency) to do stuff like physics etc.
Consoles utilize a monolithic APU die and fast, unified DRAM. There is no game-breaking latency between the CPU and the GPU. Consoles are specifically made to take advantage of heterogeneous computing. If you're not using it, you're doing it wrong!
TL;DR: having a PC-centric (aka CPU-centric) way of thinking to understand consoles is an exercise in futility.
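One way to picture the "graphics scales, gameplay compute doesn't" argument above: in a fixed frame budget, the resolution-dependent rendering cost shrinks when you drop pixels, but the simulation cost stays constant. The millisecond figures below are entirely invented for illustration; only the shape of the argument matters.

```python
# Illustration of "rendering scales with resolution, gameplay compute doesn't".
# Millisecond costs are invented; dropping resolution frees rendering time,
# but the AI/physics/pathfinding cost per frame does not shrink with it.

FRAME_BUDGET_MS = 33.3          # 30 fps frame budget
sim_ms = 12.0                   # AI, pathfinding, physics: fixed per frame
render_ms_at_4k = 24.0          # resolution-dependent cost at native 4K

for res_name, pixel_fraction in (("native 4K", 1.0), ("1440p", 0.44), ("1080p", 0.25)):
    render_ms = render_ms_at_4k * pixel_fraction
    total = sim_ms + render_ms
    verdict = "fits" if total <= FRAME_BUDGET_MS else "over budget"
    print(f"{res_name}: {total:.1f} ms ({verdict}) - simulation still costs {sim_ms} ms")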
I'm truly sorry if your brain capacity is too limited to understand what I wrote. Move along.
Have you been drinking, sir? That was a lot of gibberish.
I'm truly sorry if your brain capacity is too limited to understand what I wrote. Move along.
PS: I don't drink. Alcohol is bad for your liver.
But does AI need dedicated hardware, or can it just be delivered via the cloud? That's an important part of Microsoft's vision.
I was motivated by your post to explain some things, since the Vega/GCN vs Navi/RDNA (compute vs rasterization) flops debate just doesn't want to die.
Well, what you wrote had nothing to do with my post which you were replying to, so there's that.
I was motivated by your post to explain some things, since the Vega/GCN vs Navi/RDNA (compute vs rasterization) flops debate just doesn't want to die.
Why were you offended?
Here's another example of machine learning in the context of next-gen AI:
All this stuff has to run locally on the same hardware. No cloud BS (unless the whole game runs on the cloud).
We know that Navi will support INT4/INT8/INT16/FP16/FP32 acceleration, so it's going to be suitable for all sorts of compute workloads. No need for a dedicated "AI chip" or Tensor cores (like nVidia does). Where's the disagreement here?
It's optimized for all of them, not just for rasterization/3D graphics.
Yes, it supports INT4/INT8/INT16/FP16/FP32, but it's not optimized for just one of them. You could make much better use of transistors arranged in a way to just utilize low precision, no?
Same. Are you not excited for next-gen, self-learning AI?
Also, my post was in regards to character animation. I'm excited for what next gen means for those kinds of things. Not just graphics.
It's optimized for all of them, not just for rasterization/3D graphics.
AFAIK, Navi only lacks FP64 acceleration (which is a Vega-uarch-specific feature), and this makes sense, because FP64 requires a lot more transistors.
Example:
Navi 10TF (FP32)
FP16 -> 20TF (double the rate at half precision)
INT8 -> 40 TOPS (tera integer operations per second, since it's not floating point anymore)
INT4 -> 80 TOPS
Machine learning uses INT8 right now and there's some research going on about INT4.
Modern GPUs aren't just for pixel/vertex shaders. I remember people mocking Rapid Packed Math, because they thought it was for pixel shaders. We live in 2019, not in 2003.
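The FP16/INT8/INT4 numbers listed above boil down to one rule: each step down in precision doubles the nominal rate, provided the hardware supports packed math at that width. A tiny helper that just encodes that rule, using the 10 TF FP32 baseline from the post:

```python
# Precision scaling as described above: starting from FP32 TFLOPS, each step
# down in precision (FP16, INT8, INT4) doubles the nominal rate, assuming
# packed-math support at that width.

def throughput(fp32_tflops: float) -> dict:
    return {
        "FP32 (TFLOPS)": fp32_tflops,
        "FP16 (TFLOPS)": fp32_tflops * 2,
        "INT8 (TOPS)": fp32_tflops * 4,
        "INT4 (TOPS)": fp32_tflops * 8,
    }

for precision, rate in throughput(10.0).items():
    print(f"{precision}: {rate:g}")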
Same. Are you not excited for next-gen, self-learning AI?
This will be the biggest leap ever in terms of AI...
AMD has made some improvements so that each CU can execute multiple workloads with differing precision. It's an evolution of asynchronous compute, if you will.
Yes, it's optimized for all of them, which makes them not optimal for any one of them. That's optimal for when you want flexibility, not efficiency. If they found a way to optimize 100% for all of them, that would be the holy grail. There are always trade-offs for flexibility.
No worries!
Oh yes, I'm excited for every gameplay-enhancing feature.
Sorry if I came off a little harsh btw, my apologies!