If Senua's Saga: Hellblade II is actually what we can expect from next-gen consoles, does that mean the RTX 2080 will be outdated by 2020?

It is a bit funny reading this thread now that we know, thanks to DF, that it pushed roughly 60% of the pixels of native 4K/30 (sub-4K and 24 fps). DF is saying they used a high-end PC for this pre-rendered, in-engine cutscene, as XSX silicon could not have been ready in time for a studio of their size. In other words: it has no bearing on how the game will look during gameplay. Ninja Theory removed the words "real time" from the announcement tweet after the DF video as well.
 
Honestly, now that I've collected my thoughts, I don't feel like the Hellblade demo was really that impressive!

They start with a barren landscape with volumetric volcano smoke, which is definitely pre-baked, and current-gen games like Ryse, Tomb Raider and Uncharted 4 already did that, as you can see here:
[image: volumetric smoke comparison from current-gen games]

So it's nothing new, and they end the trailer with a close-up of the character's head, which is easy to do since all the data in RAM can be focused on her head!

I mean, to this day Microsoft haven't shown anything clear; it's all smoke and mirrors. Two games have been shown now: Halo Infinite with no gameplay whatsoever, and Hellblade, likewise without gameplay! They are acting like cunts!

When Sony revealed the PS4 they showed Killzone gameplay, and they showed Ghost of Tsushima gameplay at TGS, so why the fuck are Microsoft pussying around? If they can't do it, why not just quit and call it a day? The industry is called gaming; we play games, not watch movies and cutscenes!

 
It was a teaser trailer a year before the thing releases. Name me one recent game that has gameplay in its reveal video. 😂
From their competition: God of War, Days Gone, etc. They all had extended gameplay in their reveals.

Lol at thinking Hellblade II is a 2020 title.
 
History truly does repeat itself. We've seen the same pre-console-launch nonsense again and again, and here comes the "consoles make gaming PCs irrelevant" nonsense again.

Anyway.
 
History truly does repeat itself. We've seen the same pre-console-launch nonsense again and again, and here comes the "consoles make gaming PCs irrelevant" nonsense again.

Anyway.
I think it's Microsoft's problem: as usual, they never had and still don't have anything to show. I'll simply wait for the PS5's reveal.
 
Jesus, the amount of horrible posts in this thread is mind-boggling. How do these people get out of bed in the morning without somehow killing themselves with the sheets?
My mother comes down and takes the sheets off me and delivers my chocolate milk.
 
So what's a gaming card? What you classify as a gaming card is arbitrary. There's no such thing as a "gaming card".

Some graphics cards are terrible value for gaming.
They are mainly for compute and rendering, not gaming.

That's like buying a Xeon processor and wondering why it's getting outperformed in games by much, much cheaper chips.

And it's not my definition... it's the definition.

Quadro is Nvidia's brand for graphics cards intended for use in workstations running professional computer-aided design (CAD), computer-generated imagery (CGI) and digital content creation (DCC) applications, scientific calculations and machine learning.

GeForce cards are for gaming.

So comparing a Quadro card to a console GPU doesn't make much sense, because Quadros aren't gaming cards.
 
Some graphics cards are terrible value for gaming.
They are mainly for compute and rendering, not gaming.

That's like buying a Xeon processor and wondering why it's getting outperformed in games by much, much cheaper chips.

And it's not my definition... it's the definition.

Quadro is Nvidia's brand for graphics cards intended for use in workstations running professional computer-aided design (CAD), computer-generated imagery (CGI) and digital content creation (DCC) applications, scientific calculations and machine learning.

GeForce cards are for gaming.

So comparing a Quadro card to a console GPU doesn't make much sense, because Quadros aren't gaming cards.

And now the facts:
1. Xeons are actually good gaming CPUs. Their price carries a higher margin, but if price is not a problem they are good. They also don't have an integrated graphics core.
2. The only difference between a Quadro and a GeForce is software, i.e. software is a key component of modern GPU performance, mainly because of how stupidly high-level modern gaming APIs are.
 
And now the facts:
1. Xeons are actually good gaming CPUs. Their price carries a higher margin, but if price is not a problem they are good. They also don't have an integrated graphics core.
2. The only difference between a Quadro and a GeForce is software, i.e. software is a key component of modern GPU performance, mainly because of how stupidly high-level modern gaming APIs are.

It's almost like you are agreeing with me.

Oh, you are.

Dollar for dollar, Xeons aren't good value for gaming.
Dollar for dollar, Quadros aren't good value for gaming.

Nowhere did I say either wasn't good for gaming; I was talking about value.
 
Some graphics cards are terrible value for gaming.
They are mainly for compute and rendering, not gaming.

That's like buying a Xeon processor and wondering why it's getting outperformed in games by much, much cheaper chips.

And it's not my definition... it's the definition.

Quadro is Nvidia's brand for graphics cards intended for use in workstations running professional computer-aided design (CAD), computer-generated imagery (CGI) and digital content creation (DCC) applications, scientific calculations and machine learning.

GeForce cards are for gaming.

So comparing a Quadro card to a console GPU doesn't make much sense, because Quadros aren't gaming cards.
As we're approaching next-gen consoles, what you call a Quadro is going to be the minimum spec for next gen... For instance, a Quadro RTX has 16 GB of VRAM and 12 teraflops; those are the same specs revealed for the Xbox Series X.

So there is no such thing as a gaming card or a gaming processor; the only thing that's relevant is what the game you want to play requires, or what the application you're trying to run requires! Games aren't a standard thing; if they were, we'd still be playing pixelated graphics on 90s GPUs.
 
Just got my 3950X. It's going to run rings around Death Stranding next year.

Pity the consoles.

On topic: yeah, the 2080 will be outdated next year. Planning to upgrade my 1080 Ti to a 3080 Ti. Hopefully no Nvidia tax...
 
As we're approaching next-gen consoles, what you call a Quadro is going to be the minimum spec for next gen... For instance, a Quadro RTX has 16 GB of VRAM and 12 teraflops; those are the same specs revealed for the Xbox Series X.

So there is no such thing as a gaming card or a gaming processor; the only thing that's relevant is what the game you want to play requires, or what the application you're trying to run requires! Games aren't a standard thing; if they were, we'd still be playing pixelated graphics on 90s GPUs.

The Quadro RTX 5000?

The cheaper 2080 Ti has 14 Tflops... and doesn't cost 3,000 dollars.
You choosing the Quadro is just to inflate the price of entry.

And as has been stated, architectural differences will give different results when actually gaming.

So saying we need a Quadro RTX 5000 to keep up with next-gen consoles is a little misleading, to say the least.

I can guarantee a 2080 Super will keep up with next-gen consoles; once the 3000-series RTX cards come out, you still won't need to spend in excess of 1,500 to beat the next-gen consoles.

Using Quadros as the basis for any gaming comparison is disingenuous because of the inflated prices of Quadros.
 
I'm guessing it's going to end up between 2060 Super and 2070 Super.

Phil Spencer said 2x Xbox One X. Many people (including myself initially) took this as being 12 Tflops.

But 12 Navi teraflops is actually more like 2.5x the Xbox One X, which used Polaris. DF concluded that Navi is about 1.28x Polaris per flop, so I'm using that for this calculation.

I'm guessing it ends up actually being more like 9-10 Navi Tflops, which have the performance of ~12 Polaris Tflops.

And 9-10 Navi Tflops is in the range of a 2060 Super to a 2070 Super.
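Rough back-of-envelope version of that math (a sketch assuming DF's ~1.28x per-flop figure and the One X's 6.0 Tflops, both of which are estimates rather than hard facts):

$$12\,\text{TF}_{\text{Navi}} \times 1.28 \approx 15.4\,\text{TF}_{\text{Polaris-equiv}} \;\approx\; \frac{15.4}{6.0} \approx 2.6\times \text{ One X}$$

$$2 \times 6.0\,\text{TF}_{\text{Polaris}} = 12\,\text{TF}_{\text{Polaris}} \;\Rightarrow\; \frac{12}{1.28} \approx 9.4\,\text{TF}_{\text{Navi}}$$

So "2x One X" lands at roughly 9-10 Navi Tflops, which is why the 2060 Super to 2070 Super range falls out of it.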


So, a 5700 XT. How exactly is the PC part going to be "left behind" next year if it's equivalent to the Xbox GPU?

This is a stealth console vs. PC thread,

and a stealth Nvidia vs. AMD thread,

and a stealth Xbox vs. PlayStation thread.

It would be nice if we could define some of the terms so frequently used in threads like this. The technology will surely be "outdated", as it's the nature of technology to become obsolete, but modern GPUs and CPUs will likely compare well to next-gen consoles. PC players will also have the option to turn down settings to get the games to run well enough; it's not as if console games won't use techno-wizardry to render those amazing graphics at high resolutions. AMD Boost is already dabbling in these waters, and I imagine Nvidia has an equivalent tech.
 
The Quadro RTX 5000?

The cheaper 2080 Ti has 14 Tflops... and doesn't cost 3,000 dollars.
You choosing the Quadro is just to inflate the price of entry.

And as has been stated, architectural differences will give different results when actually gaming.

So saying we need a Quadro RTX 5000 to keep up with next-gen consoles is a little misleading, to say the least.

I can guarantee a 2080 Super will keep up with next-gen consoles; once the 3000-series RTX cards come out, you still won't need to spend in excess of 1,500 to beat the next-gen consoles.

Using Quadros as the basis for any gaming comparison is disingenuous because of the inflated prices of Quadros.
A 2080 Ti has 11 GB of VRAM; a Quadro RTX 5000 has 16 GB. That was my point, because next-gen games will be made with 16+ GB of VRAM in mind, and as fast as your 2080 Ti is, it won't be able to keep up with the asset sizes next-gen consoles push. And if the rumours are true that the PS5 will be using ReRAM alongside the GPU as well, that will double PC gamers' investments, meaning new PC GPUs might be bundled with SSDs like AMD's Radeon SSG, which was a workstation card.

Remember, at the time of the Xbox 360's release no PC gaming GPU could beat it, and this is looking to be the same again.
 
A 2080 Ti has 11 GB of VRAM; a Quadro RTX 5000 has 16 GB. That was my point, because next-gen games will be made with 16+ GB of VRAM in mind, and as fast as your 2080 Ti is, it won't be able to keep up with the asset sizes next-gen consoles push. And if the rumours are true that the PS5 will be using ReRAM alongside the GPU as well, that will double PC gamers' investments, meaning new PC GPUs might be bundled with SSDs like AMD's Radeon SSG, which was a workstation card.

Remember, at the time of the Xbox 360's release no PC gaming GPU could beat it, and this is looking to be the same again.

I'm enjoying the back and forth.
Remember that games use RAM and VRAM.
The 2080 Ti's 11 GB is all VRAM.
We aren't counting system RAM yet... I know our discourse may feel confrontational... but I'm legitimately enjoying going at it.

So let's keep going.

Tell me an optimized game that will fill a 2080 Super's VRAM,
let alone fill, let's say, 8 or 16 GB of system memory?
 
A 2080 Ti has 11 GB of VRAM; a Quadro RTX 5000 has 16 GB. That was my point, because next-gen games will be made with 16+ GB of VRAM in mind, and as fast as your 2080 Ti is, it won't be able to keep up with the asset sizes next-gen consoles push. And if the rumours are true that the PS5 will be using ReRAM alongside the GPU as well, that will double PC gamers' investments, meaning new PC GPUs might be bundled with SSDs like AMD's Radeon SSG, which was a workstation card.

Remember, at the time of the Xbox 360's release no PC gaming GPU could beat it, and this is looking to be the same again.
VRAM.

Next-gen console games will use less than 16 GB of VRAM + system RAM combined.
Any PC today has more than 5 GB of system RAM available for games; add the 11 GB from a 2080 Ti and you have 16+ GB of VRAM + system RAM.

To be fair, anybody with a 2080 Ti has a PC with at least 16 GB of system RAM, which puts the overall RAM count at 27 GB... remove 2 GB for the OS and you have 25 GB available for games.
 
While costing 3x as much, and it's a single component?

Let's get real here: you damn well know there are quite a lot of PC fanboys who want consoles to be way less powerful because of their typical insecurity and low self-esteem.

In every generation there's a PC crowd speculating that the consoles will get an average, underpowered GPU, and they are ALL frustrated that this seems to be nowhere near the case this gen in particular.
I can tell you that many people don't just speculate; they basically have a huge desire for the consoles to have an average-at-best GPU because of an ego problem. The idea that a smaller and way cheaper box can offer a higher power level than they expected, and end up being an excellent, better deal than building a PC (which gives headaches just thinking about it: too much effort, better to go to Walmart/Best Buy and grab a console in a single purchase lmao), hurts their ego, their pride and the money invested in that huge, 1000-pound thing they call a desktop.

I tell things how they are. I'm not naive, my friend; I can see emotions through words, and I can read their minds, their intentions and how their agenda works.

Also, every generation is a different historical context in both VALUE and TECH, partnerships, etc. BOTH Sony and MS are embracing new stuff (RDNA, custom super-fast SSDs, etc.) and aiming at higher numbers than Stadia's for obvious reasons. And just remember that these are 2020 consoles, not 2019 consoles.


You gave them the business and left them hella tight, but real talk though, you laid out straight-up facts, 100%... the "$500 console matches a $2k PC" comment especially stung really bad. You know how many hurt feelings you caused that day?

You have no idea 😅
 
VRAM.

Next-gen console games will use less than 16 GB of VRAM + system RAM combined.
Any PC today has more than 5 GB of system RAM available for games; add the 11 GB from a 2080 Ti and you have 16+ GB of VRAM + system RAM.

To be fair, anybody with a 2080 Ti has a PC with at least 16 GB of system RAM, which puts the overall RAM count at 27 GB... remove 2 GB for the OS and you have 25 GB available for games.

You cannot add system RAM to VRAM.
 
You cannot add system RAM to VRAM.
No matter whether it is VRAM or system RAM, it is available to games.
And to be fair, you can use system RAM as VRAM... it's called HBCC, if I'm not wrong.

So yes, you can add system RAM to VRAM, but that was not the point of my post... the dude is comparing VRAM + system RAM on consoles to only VRAM on PC lol
 
And to be fair, you can use system RAM as VRAM... it's called HBCC, if I'm not wrong.

HBCC is inclusive, i.e. system RAM must be larger than VRAM.

the dude is comparing VRAM + system RAM on consoles to only VRAM on PC lol

Consoles do not have "system RAM"; all of the RAM is unified and high bandwidth (i.e. it is VRAM, by PC standards).
I do not believe we will see hybrid RAM systems. Maybe for the OS, but not for games.
 
Hey, better than their usual live-action trailers from years past.

But I don't know why people feel like what was shown was so next-gen 🤔 In all honesty, when part 1 came out for PS4 it was gorgeous too, but once you play it and see how restricted and repetitive it is, you know why.
 
HBCC is inclusive, i.e. system RAM must be larger than VRAM.

Consoles do not have "system RAM"; all of the RAM is unified and high bandwidth (i.e. it is VRAM, by PC standards).
I do not believe we will see hybrid RAM systems. Maybe for the OS, but not for games.

The point is that PC games use system RAM and VRAM, not just VRAM.
Consoles do the same, but the memory is unified; the GPU and CPU still allocate RAM separately.
The dude was comparing the system RAM + VRAM of a console to only the VRAM of a PC.
 
The point is that PC games use system RAM and VRAM, not just VRAM.
Consoles do the same, but the memory is unified; the GPU and CPU still allocate RAM separately.
The dude was comparing the system RAM + VRAM of a console to only the VRAM of a PC.
The SSD is the new system RAM nowadays. Go look at the Radeon SSG GPU: it has a 1-terabyte SSD soldered onto the card, used as extended VRAM, effectively a 1-terabyte frame buffer, and that's what's coming inside next-gen GPUs.
 
The point is that PC games use system RAM and VRAM, not just VRAM.
Consoles do the same, but the memory is unified; the GPU and CPU still allocate RAM separately.
The dude was comparing the system RAM + VRAM of a console to only the VRAM of a PC.

No. Unified RAM can be used by both the CPU and GPU (in cache-coherent or pass-through mode).
There is nothing to allocate separately.
PC has architecture problems; do not spin them as advantages.
 
No. Unified RAM can be used by both the CPU and GPU (in cache-coherent or pass-through mode).
There is nothing to allocate separately.
PC has architecture problems; do not spin them as advantages.
The allocation is separate.
The GPU allocates its space and the CPU can access the results.
The CPU allocates its space and the GPU can access the results.

[image: AMD HSA hUMA slide]


Main hUMA features:
  • Bi-Directional Coherent Memory - any update made by one processing element will be seen by all other processing elements, GPU or CPU
  • Pageable Memory - the GPU can take page faults and is no longer restricted to page-locked memory
  • Entire Memory Space - CPU and GPU processes can dynamically allocate memory from the entire memory space
There is no spin lol.
The advantages of hUMA are cost and easier code development.
To be fair, the PC scheme is better for performance because it has two independent buses... in consoles they tried to create another bus, but the performance is not optimal and is greatly affected by how it's used.
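For the PC folks: a rough analogue of that hUMA model is CUDA managed memory, where a single allocation is visible to both the CPU and the GPU. This is just a minimal sketch to illustrate the "one pool, both processors" idea, not console/GNM code:

```cuda
// Minimal sketch of a unified (hUMA-style) pool on PC using CUDA managed memory.
// Illustration only -- not console code; names and sizes are made up.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void addOne(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;                    // GPU writes the shared allocation in place
}

int main() {
    const int n = 1024;
    int *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(int));  // one pool, visible to CPU and GPU
    for (int i = 0; i < n; ++i) data[i] = i;    // CPU fills it directly, no staging copy
    addOne<<<(n + 255) / 256, 256>>>(data, n);  // GPU updates the same memory
    cudaDeviceSynchronize();                    // wait so the CPU sees coherent results
    printf("data[0..1] = %d, %d\n", data[0], data[1]);  // CPU reads GPU results directly
    cudaFree(data);
    return 0;
}
```

The point of the analogy is only that both processors see one address space; the console hardware does this natively with high-bandwidth memory, while on PC the driver migrates pages behind the scenes.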
 
The allocation is separate.
The GPU allocates its space and the CPU can access the results.
The CPU allocates its space and the GPU can access the results.

[image: AMD HSA hUMA slide]


Main hUMA features:
  • Bi-Directional Coherent Memory - any update made by one processing element will be seen by all other processing elements, GPU or CPU
  • Pageable Memory - the GPU can take page faults and is no longer restricted to page-locked memory
  • Entire Memory Space - CPU and GPU processes can dynamically allocate memory from the entire memory space
There is no spin lol.
The advantages of hUMA are cost and easier code development.
To be fair, the PC scheme is better for performance because it has two independent buses... in consoles they tried to create another bus, but the performance is not optimal and is greatly affected by how it's used.

Tautology. If any CPU or GPU core has direct access to any RAM page, it's unified, not separate.
VRAM has multiple buses inside. That's why there is a "320-bit bus" when a GDDR6 chip's bus width is only 32 bits: it's 10 separate buses. Adding one more is laughable...
 
Tautology. If any CPU or GPU core has direct access to any RAM page, it's unified, not separate.
VRAM has multiple buses inside. That's why there is a "320-bit bus" when a GDDR6 chip's bus width is only 32 bits: it's 10 separate buses. Adding one more is laughable...
Each memory chip needs a bus... DDR4 is a 32-bit bus per chip.
But that is unrelated to the subject.

No matter whether it is unified or not... the CPU and GPU need to allocate RAM.
That will never change.

If the GPU allocates a part of the RAM, the CPU can access (read) its results in a hUMA (unified) system.
In a NUMA system, the CPU needs to ask the GPU to fetch the data and send it over (the CPU doesn't have direct access to the RAM the GPU allocated).

On PC you have two big dedicated buses to access RAM... one for the CPU to access system RAM (chips x 32 bits) and another for the GPU to access VRAM (chips x 32 bits)... they are exclusive, so neither impacts the other's performance.

On consoles you have a hUMA (unified) system where there is a main bus to the RAM that is shared between the CPU and GPU... the issue is that the more the GPU uses the RAM, the less is left for the CPU (and vice versa), so performance really goes down because of that... so the consoles created a new bus (called Garlic in the PS4) to let the CPU access the RAM directly without needing to use the unified bus... that is like a band-aid, because even with that config, if the CPU uses too much of the RAM bandwidth, the GPU disproportionately can't use the rest of the bandwidth.

The PS4 is easy to code for due to the unified memory, but a PC with two dedicated pools of memory will always offer better performance.
 
Each memory chip needs a bus... DDR4 is a 32-bit bus per chip.
But that is unrelated to the subject.

No matter whether it is unified or not... the CPU and GPU need to allocate RAM.
That will never change.

If the GPU allocates a part of the RAM, the CPU can access (read) its results in a hUMA (unified) system.
In a NUMA system, the CPU needs to ask the GPU to fetch the data and send it over (the CPU doesn't have direct access to the RAM the GPU allocated).

On PC you have two big dedicated buses to access RAM... one for the CPU to access system RAM (chips x 32 bits) and another for the GPU to access VRAM (chips x 32 bits)... they are exclusive, so neither impacts the other's performance.

On consoles you have a hUMA (unified) system where there is a main bus to the RAM that is shared between the CPU and GPU... the issue is that the more the GPU uses the RAM, the less is left for the CPU (and vice versa), so performance really goes down because of that... so the consoles created a new bus (called Garlic in the PS4) to let the CPU access the RAM directly without needing to use the unified bus... that is like a band-aid, because even with that config, if the CPU uses too much of the RAM bandwidth, the GPU disproportionately can't use the rest of the bandwidth.

The PS4 is easy to code for due to the unified memory, but a PC with two dedicated pools of memory will always offer better performance.

Oh god.
It's called Onion.
Garlic is the GPU bus.
And everything else is also not accurate.
On PC you access main RAM through a ~20 GB/s bus,
and VRAM through a ~500 GB/s bus.
But for the CPU to access a GPU page, you need to put up a barrier and then download the data through PCIe, which is in the ~4 GB/s area, and will stall the GPU on many occasions.
In the PS4 there was a special non-cache-coherent bus for that, so you can read or write data between the CPU and GPU without stalling the render pipeline.
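To picture that PC-side round trip, here's a minimal sketch of the discrete-GPU path: an explicit VRAM allocation, then a synchronizing copy back over PCIe before the CPU can touch the results. Buffer names and sizes are made up for illustration:

```cuda
// Sketch of the discrete-GPU path on PC: the CPU cannot peek at VRAM directly,
// it has to wait for the GPU and pull the data back across PCIe.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void addOne(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;                       // GPU works on its own VRAM copy
}

int main() {
    const int n = 1024;
    std::vector<int> host(n, 0);
    int *device = nullptr;
    cudaMalloc(&device, n * sizeof(int));          // allocation lives in VRAM only
    cudaMemcpy(device, host.data(), n * sizeof(int),
               cudaMemcpyHostToDevice);            // upload over PCIe
    addOne<<<(n + 255) / 256, 256>>>(device, n);
    cudaMemcpy(host.data(), device, n * sizeof(int),
               cudaMemcpyDeviceToHost);            // synchronizing readback over PCIe:
                                                   // the CPU waits here until the GPU is done
    printf("host[0] = %d\n", host[0]);
    cudaFree(device);
    return 0;
}
```

Compared with the managed-memory sketch earlier, every CPU read of GPU output goes through that explicit copy, which is the stall being described.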
 
Oh god.
It's called Onion.
Garlic is the GPU bus.
And everything else is also not accurate.
On PC you access main RAM through a ~20 GB/s bus,
and VRAM through a ~500 GB/s bus.
But for the CPU to access a GPU page, you need to put up a barrier and then download the data through PCIe, which is in the ~4 GB/s area, and will stall the GPU on many occasions.
In the PS4 there was a special non-cache-coherent bus for that, so you can read or write data between the CPU and GPU without stalling the render pipeline.
It is accurate; the name was my confusion.
Devs don't code their games to have the CPU accessing GPU pages.

The PS4 has serious bandwidth issues when the CPU accesses the memory, and it greatly affects the GPU's access to the memory too.

[image: PS4 memory bandwidth usage chart]
 
This thread is not about PC vs. console, Jesus Christ.
Why don't you guys just enjoy everything?
Right now I play mostly on PC because it's just better than my PS4 Pro, but at the same time I'm happy with the quality of the exclusives on the console.
Next year I will definitely buy a PS5, and if the games run better there than on my PC I will buy games there until eventually PC surpasses them for a good price and I upgrade again.
I just think the XSX and PS5 will be easy on the wallet compared to a PC.
 
It is accurate; the name was my confusion.
Devs don't code their games to have the CPU accessing GPU pages.

The PS4 has serious bandwidth issues when the CPU accesses the memory, and it greatly affects the GPU's access to the memory too.

[image: PS4 memory bandwidth usage chart]

That's one slide without context.
Just a reminder: on PC, accessing a GPU page effectively drops bandwidth to zero,
because you don't have any cache-coherent access at all, only a DMA through PCIe on a barrier.
 
That's one slide without context.
Just a reminder: on PC, accessing a GPU page effectively drops bandwidth to zero,
because you don't have any cache-coherent access at all, only a DMA through PCIe on a barrier.
Devs don't code games to access GPU pages via the CPU... the CPU doesn't need the data that the GPU uses in games.
You won't have that issue on PC.

These options for the CPU to access GPU pages on PC exist more for heavy GPU computing tasks like HPC.
 
Devs don't code games to access GPU pages via the CPU... the CPU doesn't need the data that the GPU uses in games.
You won't have that issue on PC.

These options for the CPU to access GPU pages on PC exist more for heavy GPU computing tasks like HPC.

Obviously you will. Any occlusion query needs to read from GPU pages.
Any index buffer change (animations, physics) needs to be written into GPU pages.
Yes, and because the PC architecture is so shitty, you need to work around these issues.
 
Thing is, that crowd, as you refer to them, is generally right, and likely will be again. How do people not learn from the same thing happening time after time? Sony/Microsoft make huge boasts, they drop some reveals (which are actually running on PC), and when the actual specs are released and the huge downgrades are apparent, everyone who bought into the hype is disappointed.
Given the current generation of hardware, we have seen nothing that seems unreasonable compared to the existing consoles.
Yes they will, and since consoles are a closed platform, they'll have around twice the performance of an equivalent PC part, as per Carmack.

Edit:

[image: Carmack quote]


Edit: @nkarafo, why the laughing reaction?
That guy obviously doesn't optimize his PC games, otherwise he would find out this is pure lies! 🙈🙉🙊
 
I don't think memory will really be a problem. The games on consoles look great, and doubling that memory (at least) should do great; pairing that with a fast SSD to reduce the streaming buffers should make for plenty of graphics memory.

There is never enough graphics memory. I can choke my 2080 Ti very easily with just one load of assets for one character. So what you are saying is factually false.

There's plenty that's restricting about DX and Vulkan: they expose a generic interface for hardware that is not generic, which requires features either to not be exposed or to sit behind an abstraction layer. As an example of APIs that don't have this issue, GNM and the console versions of DX generally expose more than the desktop APIs because they have a single target.

Explain this in detail. Give me a function in Vulkan and compare it to GNM with a performance analysis. These consoles have had GNM for an entire generation and they've never done anything remotely mind-boggling in the graphics area. All I see is struggling FPS, short LOD distances, no tessellation, no advanced form of texture filtering, simple AO features and upscaled 4K... Where can you show me a real-world example of this optimized speed due to using GNM?

There are plenty of differences between x86 microarchitectures that can make code that is fast on one architecture slower on another. Additionally, targeting a single GPU allows you to do the exact same thing: write a single code path that targets the exact architecture.

Give me an example. I know x86 assembly, so you can put up some code.

There's plenty more to graphics than fill rate, but I think you'll see these consoles be fill-rate monsters, and they'll probably also contain some new AMD compression tech to help with bandwidth.

We'll see how many games are truly 4K as opposed to upscaled. We can start with the two reveals, Godfall and Hellblade II: did they run at true 4K?

The changes aren't "required" for a console; what they do is allow you to make your GPU perform better than a desktop GPU with the same specs. It's insane to compare a PS4 to an Nvidia 20xx; no one is doing that.

You can't even compare a PS5/XSX to a PC either. PC will still offer more power.
 
There is never enough graphics memory. I can choke my 2080 Ti very easily with just one load of assets for one character. So what you are saying is factually false.

Irrelevant. Doesn't prove anything.
You can always deliberately exhaust any resource.


Give me a function in Vulkan and compare it to GNM with a performance analysis.

GNM stuff is under NDA.
But I have already given you an example: you can write the actual command buffer piecewise to the memory ports, which effectively drives draw call overhead to zero.
You can feed the GPU pipeline from the GPU itself.

Give me an example. I know x86 assembly, so you can put up some code

Assembly or shader language formats are irrelevant.
What's relevant is how you keep these pipelines busy at all times, and here a PC developer has far fewer tools at their disposal.
 
Irrelevant. Doesn't prove anything.
You can always deliberately exhaust any resource.

Yep, I can. And the more content I want, the more memory I am going to need. Remember the old DOS days when they thought 640K would be enough RAM? If we are moving to ray tracing and more advanced rendering like in film, I am damn sure a measly 32 GB of VRAM won't be enough. Content will always get better, and will always require more resources.

GNM stuff is under NDA.
But I have already given you an example: you can write the actual command buffer piecewise to the memory ports, which effectively drives draw call overhead to zero.
You can feed the GPU pipeline from the GPU itself.

All these tricks but no substance. Where was this used in a game you developed? Is it running faster than a top-tier PC? Surely a multiplatform game would use GNM (they are forced to).

What's relevant is how you keep these pipelines busy at all times, and here a PC developer has far fewer tools at their disposal.

Are you saying that a PC developer doesn't keep their pipelines busy?

In the end, it doesn't matter what kind of tricks a first-party dev can pull on a console box. The brute-force power of a high-end PC will always overpower a closed-box console; no matter what tricks you come up with, data packing and using every ounce of the pipeline aren't going to be enough to make your game look better and run faster than the same game on a high-end PC.
 
Yep, I can. And the more content I want, the more memory I am going to need. Remember the old DOS days when they thought 640K would be enough RAM? If we are moving to ray tracing and more advanced rendering like in film, I am damn sure a measly 32 GB of VRAM won't be enough. Content will always get better, and will always require more resources.

That's a non-constructive argument. It is true, but doesn't give any new info on reality.

All these tricks but no substance. Where was this used in a game you developed? Is it running faster than a top-tier PC? Surely a multiplatform game would use GNM (they are forced to).

All of game graphics is "tricks". "Substance" cannot be emulated in real time; that's the reality.
Any trick that makes some effect possible is a good trick.

In the end, it doesn't matter what kind of tricks a first-party dev can pull on a console box. The brute-force power of a high-end PC will always overpower a closed-box console; no matter what tricks you come up with, data packing and using every ounce of the pipeline aren't going to be enough to make your game look better and run faster than the same game on a high-end PC.

That's a philosophical argument.
Obviously you can make console games very hard to run on PC, on purpose.
But nobody does that.
Obviously there are a lot of physical phenomena that are very hard or straight-up impossible to brute-force with any hardware,
even pretty "simple" ones, like tire <-> road interaction.
So in the end, even for computational physics, "tricks" are the way to go.
And historically, for example, animations in PC games sucked big time until DX12/Vulkan, because index buffer updates from a single thread are a bitch.

The bottom line being:
yes, PC hardware is powerful;
no, the PC platform as a hardware + software architecture is underutilized and a straight-up bad choice for games.
 
All of game graphics is "tricks". "Substance" cannot be emulated in real time; that's the reality.
Any trick that makes some effect possible is a good trick.

If we are to go there, even graphics in general is a trick. I think the main goal is to get as close to reality as possible, and for that you need powerful hardware more so than tricks. It's a nice trick to have an implicit light source be a point or a direction vector. It gives OK results, but it's time to sample arbitrarily shaped light sources (which is what the real world is made of); that gives way more accurate results that look much better than the tricks. So I'd love to see a trick that gives the results of a path tracer without relying on importance sampling.
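For reference, the standard Monte Carlo estimator for direct light from an area source, sampling $N$ points $y_i$ on the light with pdf $p(y_i)$ (in area measure), looks roughly like this:

$$L_o(x,\omega_o) \approx \frac{1}{N}\sum_{i=1}^{N} f_r(x,\omega_i,\omega_o)\, L_e(y_i)\,\frac{\cos\theta_x\,\cos\theta_{y_i}}{\lVert x - y_i\rVert^{2}}\,\frac{V(x,y_i)}{p(y_i)}$$

Here $f_r$ is the BRDF, $L_e$ the light's emitted radiance, $V$ the visibility term, and the cosine/distance factor converts the area sample to solid angle; a point light is just the degenerate case where the sum collapses to a single term with nothing left to sample.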

That's a philosophical argument.
Obviously you can make console games very hard to run on PC, on purpose.
But nobody does that.
Obviously there are a lot of physical phenomena that are very hard or straight-up impossible to brute-force with any hardware,
even pretty "simple" ones, like tire <-> road interaction.
So in the end, even for computational physics, "tricks" are the way to go.
And historically, for example, animations in PC games sucked big time until DX12/Vulkan, because index buffer updates from a single thread are a bitch.

I can agree with that. But that doesn't mean someone couldn't have solved that index buffer update problem on the PC architecture.

The bottom line being:
...
no, the PC platform as a hardware + software architecture is underutilized and a straight-up bad choice for games.
I wouldn't call it a bad choice for games, because games on it look better than on its counterparts and run faster.
 