Martha Is Dead Dev Impressed By PS5’s Previously Unannounced Texel Density

Isn't AI more of a CPU (and hard work for devs first and foremost) thing?

I don't think teraflops have anything to do with how good an AI is.
I was more so just using flops, aka computing speed, as in "use the improved overall power of the console and put it towards more complex AI". And correct me if I'm wrong, but doesn't it have something to do with overall AI, since improved computing power/speed directly correlates to how quickly (and how accurately, depending on what the AI should do in a specific situation) an AI can respond to a player's input? Not to mention "learning" how a player reacts to the AI itself and creating a counter/plan/response more in line with human thinking.
 
Depends. GPUs are really good at calculating certain things really quickly. That's why there was so much noise about GPGPUs when the PS4/XB1 launched (that and their relatively weak CPUs). I believe AI was mentioned as one of the things that could really benefit from GPGPU time, but I could be wrong.

Anyway, aren't TFLOPs just a measurement of how many calculations per second a machine can do, CPU and GPU, or am I misremembering? TBH, I'm really only repeating things I've picked up lurking around here for the past two console gens, so don't shoot me if I'm off. Lol.
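To put a rough number on that: a TFLOPS rating is just trillions of floating-point operations per second, which you can divide down into a per-frame budget. A quick sketch, assuming a purely hypothetical ~10 TFLOPS GPU (the figure is an illustrative assumption, not a spec):

```python
# Back-of-the-envelope sketch of what a teraflop rating means per frame.
# The 10.0 TFLOPS figure is an illustrative assumption, not an actual spec.
TFLOPS = 10.0                        # trillion floating-point operations per second
ops_per_second = TFLOPS * 1e12
for fps in (30, 60):
    ops_per_frame = ops_per_second / fps
    print(f"{fps} fps -> ~{ops_per_frame:.2e} floating-point ops per frame")
```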
I think that AI is mostly the devs' work. Some old games like F.E.A.R. have better AI than 95% of modern games, and current-gen hardware is already enough to create great AI, but like people say, AI doesn't sell many copies and it's time-consuming work for devs.
More complicated AI means more bugs and glitches that can happen if it's not tuned properly.
 
I was more so just using flops, aka computing speed, as in "use the improved overall power of the console and put it towards more complex AI". And correct me if I'm wrong, but doesn't it have something to do with overall AI, since improved computing power/speed directly correlates to how quickly (and how accurately, depending on what the AI should do in a specific situation) an AI can respond to a player's input? Not to mention "learning" how a player reacts to the AI itself and creating a counter/plan/response more in line with human thinking.
Are we already at the point where AI can learn without devs working on it?
I think we are still in the phase where good AI is 99% the devs' hard work, tbh.
Maybe in the future...

But you are right about power, I guess faster hardware can accelerate AI reaction time, but I'm no expert.
 
I think that AI is mostly the devs' work. Some old games like F.E.A.R. have better AI than 95% of modern games, and current-gen hardware is already enough to create great AI, but like people say, AI doesn't sell many copies and it's time-consuming work for devs.
More complicated AI means more bugs and glitches that can happen if it's not tuned properly.
Very true, and it's a really hard thing to advertise, unlike better visuals. That said, as visuals start to plateau to the casual eye, I do hope they'll start putting more (machine and manpower) resources towards things like physics and AI.

It seemed like, after Half-Life 2, every dev that could was jamming Havok physics into their games because it had become the standard. It would really only take one big game using physics or compelling AI as a core feature to take off, and the landscape of what's important in game design could change entirely.

I can hope, anyway.
 
I was more so just using flops, aka computing speed, as in "use the improved overall power of the console and put it towards more complex AI". And correct me if I'm wrong, but doesn't it have something to do with overall AI, since improved computing power/speed directly correlates to how quickly (and how accurately, depending on what the AI should do in a specific situation) an AI can respond to a player's input? Not to mention "learning" how a player reacts to the AI itself and creating a counter/plan/response more in line with human thinking.
Nah, compute shaders could theoretically help with calculating AI (just like anything else, really), but it would have no effect on AI being more responsive.

People need to stop thinking that better hardware will improve AI in games. Horsepower has never been the problem with bad AI. It's devs deliberately dumbing down enemies so that a group of enemies can still be beaten with relative ease. Of course a single opponent then looks stupid in comparison. It's a matter of game design, always was.
The AI learning is an interesting point though. I don't think there are many (if any) games that do this, partly for game design reasons, because a self-learning AI is hardly controllable and you typically don't want that in your games. Using it as a form of parametrization is quite possible, however (like just changing a couple of values that raise or lower enemy AI quality, such as field of view, sight memory, etc.), but there are no real practical applications of GPGPU for that in common game AI yet, as far as I'm aware.

As an avant-garde experimental game I could see this working, however.
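To make that parametrization idea concrete, here's a minimal sketch of what "adaptive" enemy tuning can look like in practice. The names and numbers are hypothetical, not taken from any actual game:

```python
# Minimal sketch of "adaptive" AI done as parametrization rather than real
# machine learning. All names and values here are hypothetical.
from dataclasses import dataclass

@dataclass
class EnemyPerception:
    field_of_view_deg: float = 90.0   # how wide the enemy can see
    sight_memory_s: float = 3.0       # how long it remembers a spotted player
    reaction_time_s: float = 0.6      # delay before it responds

def adapt_to_player(p: EnemyPerception, player_flanks_often: bool) -> EnemyPerception:
    # If the player keeps flanking, widen the field of view and extend memory.
    if player_flanks_often:
        p.field_of_view_deg = min(p.field_of_view_deg + 15.0, 170.0)
        p.sight_memory_s += 1.0
    return p

print(adapt_to_player(EnemyPerception(), player_flanks_often=True))
```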
 
Very true, and it's a really hard thing to advertise, unlike better visuals. That said, as visuals start to plateau to the casual eye, I do hope they'll start putting more (machine and manpower) resources towards things like physics and AI.

It seemed like, after Half-Life 2, every dev that could was jamming Havok physics into their games because it had become the standard. It would really only take one big game using physics or compelling AI as a core feature to take off, and the landscape of what's important in game design could change entirely.

I can hope, anyway.
Great AI is a big selling point for me; I literally replay entire levels again and again to see how enemies react to different stimuli in games with decent enough AI.
 
Nah, compute shaders could theoretically help with calculating AI (just like anything else, really), but it would have no effect on AI being more responsive.

People need to stop thinking that better hardware will improve AI in games. Horsepower has never been the problem with bad AI. It's devs deliberately dumbing down enemies so that a group of enemies can still be beaten with relative ease. Of course a single opponent then looks stupid in comparison. It's a matter of game design, always was.
The AI learning is an interesting point though. I don't think there are many (if any) games that do this, partly for game design reasons, because a self-learning AI is hardly controllable and you typically don't want that in your games. Using it as a form of parametrization is quite possible, however (like just changing a couple of values that raise or lower enemy AI quality, such as field of view, sight memory, etc.), but there are no real practical applications of GPGPU for that in common game AI yet, as far as I'm aware.

As an avant-garde experimental game I could see this working, however.
Let's use Watson (the Jeopardy supercomputer) as an example: do you think that if you only gave it 10-15% of its current power, it would be able to compete or think at the level it does?

"
The Watson supercomputer processes at a rate of 80 teraflops (trillion floating point operations per second).
The system and its data are self-contained in a space that could accommodate 10 refrigerators.
"

So you're telling me improved hardware doesn't correlate in some way to improved AI? The AI achievements of today could be replicated on 15- or 30-year-old GPU/CPU combos while keeping the same small household form factor?

And didn't Hello Neighbor use some form of basic, bottom-level procedural AI that allowed the game to "learn" (and I put it in quotes because I'm sure it's not real learning) your tactics and respond accordingly to them?
 
Let's use Watson (the Jeopardy supercomputer) as an example: do you think that if you only gave it 10-15% of its current power, it would be able to compete or think at the level it does?

"
The Watson supercomputer processes at a rate of 80 teraflops (trillion floating point operations per second).
The system and its data are self-contained in a space that could accommodate 10 refrigerators.
"

So you're telling me improved hardware doesn't correlate in some way to improved AI? The AI achievements of today could be replicated on 15- or 30-year-old GPU/CPU combos while keeping the same small household form factor?

And didn't Hello Neighbor use some form of basic, bottom-level procedural AI that allowed the game to "learn" (and I put it in quotes because I'm sure it's not real learning) your tactics and respond accordingly to them?
If power hasn't been the issue with AI for years, what makes you think that improving power will help?
Devs would have to choose to implement AI so complex and so heavy on computation that power would help. But they do not right now, so there's your answer.

Improved hardware does correlate with improved AI, but only with generational leaps, and I'm not talking about a single console generation here.
Machine learning, for example, was a thing in the '60s, but machines weren't strong enough to train neural networks the way we can today on a normal desktop PC.
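Just to make the "normal desktop PC" point concrete: the toy network below learns XOR with nothing but NumPy and trains in well under a second on any modern machine. It's deliberately trivial and has nothing to do with game AI; it's only here as a scale reference:

```python
# Toy illustration only: a tiny neural network learning XOR with plain NumPy.
# Not game AI, just a scale reference for what a desktop handles trivially.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                    # hidden layer activations
    out = sigmoid(h @ W2 + b2)                  # network output
    d_out = (out - y) * out * (1 - out)         # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)          # backpropagated to the hidden layer
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))  # typically converges close to [[0], [1], [1], [0]]
```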

If you wanted to train a neural network in more or less real time in a game, then GPGPU would help with that, but like I explained before, that is not a useful scenario. It might be in the future, when new techniques to model AI have been invented that help address the issues that come with machine learning during play.

As for your Hello Neighbor example, I don't know much about the game, but 'adaptive AI' per se is not new. The term 'intelligence' with regard to AI and different algorithms is very unclear among academics. A state machine, a common pattern used in game AI, is not much more than a glorified series of if-else statements, yet it can produce 'AI' in games that appears to be smart, even learning. Changing parameters of the AI depending on the game state is easy enough to do already, but we are talking about horsepower here, and the kind of learning that horsepower could provide is not used in games due to unpredictability.
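A minimal sketch of that "glorified if-else" idea (generic, not taken from any particular game): an enemy state machine is really just a handful of conditions deciding which state to switch to.

```python
# Minimal finite state machine for an enemy, to show how simple the underlying
# pattern is. States and thresholds are made up for the example.
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

def next_state(state: State, distance_to_player: float, can_see_player: bool) -> State:
    # Each transition is just an if/else on the current situation.
    if state is State.PATROL:
        return State.CHASE if can_see_player else State.PATROL
    if state is State.CHASE:
        if not can_see_player:
            return State.PATROL
        return State.ATTACK if distance_to_player < 2.0 else State.CHASE
    if state is State.ATTACK:
        return State.ATTACK if distance_to_player < 3.0 else State.CHASE
    return state

state = State.PATROL
for dist, seen in [(10.0, False), (8.0, True), (1.5, True), (6.0, True), (9.0, False)]:
    state = next_state(state, dist, seen)
    print(state)
```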

Game AI will not evolve unless devs choose to evolve it. That's the thing you should take away from this.
 
I agree with this 100%, which hopefully means people will stop falling for the "OMG 4K 60FPS" spiel and force more thought and complexity to be put into games.

Prolly won't happen this gen tho.
I wish, but yeah, it won't happen. The entire tech advancement and the desire to produce blockbuster-esque games have taken away the willingness to put money into ideas rather than tech.
If you combine ideas with tech, you end up having a high risk investment with amounts of money that businesspeople would not spend.

If you go with ideas only (low to medium tech), you end up with a good game, but it most likely won't reach the mass market.*

If you go with tech only you end up where we are today with bigger productions.

* This group can be roughly split into two subsets: the small indie side and the AA side.
Sadly, AA has mostly died, so even if some indie games come up with new and fresh ideas, they generally don't have the budget (and often the skill) to reach AA standards. I believe AA has to make a comeback for the industry to thrive in an artistic sense, but it seems like I'm in the minority thinking that.
 
Did some quick research on the whole texel density thing; for us end consumers it's basically nothing other than texture resolution... So I guess the dev has every right to be excited about going from the PS4/Pro's 5-5.5GB to the 13-16GB or whatever the PS5 will have.
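For anyone wondering what the term actually measures: texel density is usually the texture's resolution divided by the world-space size it covers (texels per metre), and more usable memory lets you keep that number high on large surfaces. A rough illustrative sketch, with made-up numbers:

```python
# Illustrative only: texel density = texture resolution / world-space size.
def texel_density(texture_px: int, surface_size_m: float) -> float:
    """Texels per metre for a square texture mapped onto a square surface."""
    return texture_px / surface_size_m

# A 2048px texture on a 2m wall vs. the same texture stretched over an 8m wall.
print(texel_density(2048, 2.0))  # 1024 texels/m
print(texel_density(2048, 8.0))  # 256 texels/m, noticeably blurrier up close
```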
 
I'm curious to know how they'll push the PS5 and Series X memory and processing to the max in terms of graphics fidelity and effects, for instance in a new God of War game, and still achieve 4K 60 as they boast, because it's impossible; it's either one or the other: 1080p 30 + ultra graphics, or 4K 60 with roughly PS4 Pro-level graphics.

The only way, I'm betting, is with the memory bandwidth being twice what's needed and the CPU being faster than required. Like the X360's GPU bandwidth.
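To put a rough number on that trade-off: going from 1080p/30 to native 4K/60 is about an 8x jump in raw pixels pushed per second, before any per-pixel quality increase. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope pixel throughput comparison (resolution x framerate only;
# ignores per-pixel shading cost, which is where "ultra graphics" actually goes).
def pixels_per_second(width: int, height: int, fps: int) -> int:
    return width * height * fps

p1080_30 = pixels_per_second(1920, 1080, 30)
p2160_60 = pixels_per_second(3840, 2160, 60)
print(p2160_60 / p1080_30)  # 8.0 -> eight times the raw pixel throughput
```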
 
Meh. FF7 is just the first episode; it can be finished in a single day.
Ghost of Tsushima? No real gameplay available. Major red flag. Could be horrible.
The Last of Us? Meh. Played the first one, hated it, won't touch the second part, won't support SJW crap.

just another arrogant piece of shit poster!
 