So with the PS4 having hUMA, GDDR5, a more powerful GPU, and some other stuff I forgot...
How wide is the gap between the Xbox One and the PS4, in non-technical terms?
Xbone: T-103
PS4: Nemesis
But have you SEEN Titan Fall?
PS4 is more powerful, but you'll probably only see a difference in first party games.
Just like between the PS2 and the GC/Xbox multip- oh wait...
It doesn't help CPU power in the traditional sense (like making it faster, for instance), but it cuts out redundant work, and this, combined with GPU compute, will free up CPU cycles to do what it does best - work on the more intense processes (like complex AI, physics, etc.).
An incredibly simple example that will make Durante yell at me (which he probably will anyway), using the water example above: let's say we're making a jet ski game. Typically the water has to be handled serially, copying the data back and forth between the CPU and GPU. The CPU says "the jet ski is making a wake" and the GPU draws the waves coming off of it. hUMA allows both to work simultaneously without swapping data back and forth - so it happens in parallel instead of in a straight line.
So what does that mean? Well, if our jet ski game on a non-hUMA machine takes 20% of the available resources to render the water and the real-time waves created by the jet skis, it may now take 10% (I'm pulling those numbers out of my ass, but the exact % isn't the point). So that's 10% "extra" that they have. Once you add everything up... the AI, the lighting, all of the animations, the graphical effects, particles, etc... you have more overhead to add more, because of the cycles you saved on the water rendering using hUMA. So the physics may be even more in-depth... or the water particle effects may be EXTRA crazy.
So, yeah, it doesn't make the processor more powerful, but it makes the entire system more efficient, which in turn makes games better. To the end user it will feel like more power - but it's not literally more power. Upclocking would *literally* be more power, but hUMA has nothing to do with that.
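To make the copy-elimination idea concrete, here's a minimal C++ sketch of the jet ski example. This is plain CPU code standing in for both processors - an illustration of the data flow only, not real console or GPU API code:

```cpp
#include <cstring>
#include <cstdio>

struct WaterState { float height[256]; }; // hypothetical wake/wave data

// Non-hUMA flow: CPU and GPU each have their own copy of the data,
// synced by memcpy (standing in for an upload/readback over the bus)
// every frame.
void frameWithCopies(WaterState& cpuSide, WaterState& gpuSide) {
    cpuSide.height[0] += 1.0f;                            // CPU: "the jet ski is making a wake"
    std::memcpy(&gpuSide, &cpuSide, sizeof(WaterState));  // upload to "GPU memory"
    /* ...GPU draws the waves from gpuSide... */
    std::memcpy(&cpuSide, &gpuSide, sizeof(WaterState));  // read the results back
}

// hUMA-style flow: one buffer both processors can see - the copies
// (and the waiting on them) simply disappear.
void frameShared(WaterState& shared) {
    shared.height[0] += 1.0f;  // CPU writes, GPU reads the very same memory
    /* ...GPU draws the waves directly from shared... */
}

int main() {
    WaterState cpu{}, gpu{}, shared{};
    frameWithCopies(cpu, gpu);
    frameShared(shared);
    std::printf("with copies: %.1f, shared: %.1f\n", cpu.height[0], shared.height[0]);
    return 0;
}
```

Same end result either way - the win is the two memcpy calls (and the stalls around them) that the shared version never pays for.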
so having a smaller % used up rendering, say, water waves, etc... I just don't see a case where multiplatform games won't use this extra juice, when they do this with certain PC ports... does this mean extra oomph on one version may become the norm for consoles, or do they not have the extra manpower/time to do so?
Haha, cute, but the difference between the PS2 and Xbox was friggin enormous - and understandably so, since they were practically 2 years apart in technology.
PS2 vs Xbox:
FLOPS: 6.2 Gflops vs 21.6 Gflops (~250% more)
Not to mention double the RAM - that's right, 2x the memory pool and 2x the bandwidth. It's almost ridiculous. The PS2's architecture was comparable to a DX6 GPU - no hardware AA or bump mapping of any kind - while the Xbox had the first programmable-shader GPU, DX8.1 compliant, capable of normal mapping, Phong shading, real-time lighting, and cube mapping.
Even the difference between GameCube and Xbox is larger than Xbox One vs PS4. Let's take a look.
GameCube vs Xbox:
FLOPS: 10.5 Gflops vs 21.6 Gflops (~100% more)
Also, the Xbox had considerably more RAM: 64MB vs 40MB for the GameCube. The GameCube GPU was quite a bit more advanced than the PS2's, but still closer to the DX7 family of GPUs - hardware AA and bump mapping, but no programmable shaders, a feature still used on modern GPUs today, which the Xbox had thanks to the GF3.
Conclusion:
Difference between PS4 and Xbox One < difference between Xbox and GameCube.
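For what it's worth, the percentages above check out, and you can run the same arithmetic on the current pair. Note that the ~1.84 vs ~1.31 TFLOPS figures for the PS4/Xbox One GPUs are my addition from commonly cited specs, not numbers from this post:

```cpp
#include <cstdio>

// percent by which `strong` exceeds `weak`
double pctMore(double weak, double strong) { return (strong / weak - 1.0) * 100.0; }

int main() {
    std::printf("Xbox over PS2:      +%.0f%%\n", pctMore(6.2, 21.6));   // ~248%
    std::printf("Xbox over GameCube: +%.0f%%\n", pctMore(10.5, 21.6));  // ~106%
    std::printf("PS4 over Xbox One:  +%.0f%%\n", pctMore(1.31, 1.84));  // ~40%
    return 0;
}
```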
I am just going to come out and say it:
I don't have the foggiest clue what hUMA is.
No worries.
It's just that, like in that other recent thread, 2 out of 3 posts end up being about DBZ power levels, and it tends to get out of hand.
I do so love DBZ, but it doesn't add much to the conversation.
so I activate the power of hUMA with the L1 and L2 buttons?
Well, the problem is that PC ports usually use the extra oomph in the form of brute force. I'll try to explain (though the last time I did, Durante yelled at me... is it obvious that I'm afraid of Durante?).
When designing a game for the PC you aren't building for a specific system - you're building for an insanely wide array of systems. If you're building an Xbox 360 game you know exactly what to expect: every DVD drive is the same speed, every hard drive is the same speed, every GPU and CPU is the same, every unit has the same amount of RAM, etc. You get it. On the PC, I personally have a fairly modest 3.6GHz i7, a 650 Ti, and 8GB of RAM. Super PC Gamer X has a better processor, a $1000 GPU, 16GB of RAM, etc. And then Casual PC Gamer Y has a laptop with a 2.2GHz i5, a year-old mobile chip, and 4GB of RAM. A developer wants the game to run on all of these systems.
The issue is that since a PC developer needs to worry about Gamer Y's laptop, Gamer X's supercomputer isn't programmed for specifically. Things are generally just scaled up: if you have the extra processor speed/better RAM/better HD you'll load much faster; if you have the better GPU you can turn on more effects. The games obviously look much, much better on Gamer X's supercomputer... but they don't look nearly as good as they would if the developer said "fuck Gamer Y, fuck Mortimer, I'm making a game specifically for Gamer X's system" and worked to the strengths of the system as a whole instead of relying on the brute force of it just being faster.
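To make "scaled up" concrete, here's a hypothetical C++ sketch - the struct, thresholds, and numbers are all invented for illustration, but this is roughly the shape of it: one code path that picks a preset off whatever hardware it detects, rather than being hand-tuned for any single machine:

```cpp
#include <cstdio>

// Made-up quality knobs - real engines expose dozens of these.
struct Settings { int shadowRes; int particles; bool fancyAO; };

// Pick a preset from detected hardware; the tiers and cutoffs are invented.
Settings pickPreset(int vramMB, double cpuGHz) {
    if (vramMB >= 2048 && cpuGHz >= 3.5) return {4096, 100000, true};  // Gamer X's rig
    if (vramMB >= 1024)                  return {2048,  20000, true};  // my 650 Ti box
    return {1024, 5000, false};                                        // Gamer Y's laptop
}

int main() {
    Settings s = pickPreset(1024, 3.6);
    std::printf("shadows: %d, particles: %d, fancy AO: %s\n",
                s.shadowRes, s.particles, s.fancyAO ? "on" : "off");
    return 0;
}
```

Nothing in there exploits what any one machine is uniquely good at - the high tier just gets bigger numbers, which is exactly the brute force being described.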
That's why the consoles - modest in terms of PC specs - will make great looking games. Developers will learn these systems and target specifically what they do well. Meanwhile, there may be amazing features in my 650 Ti that never, ever get used, because there just isn't a reason for a PC developer to hone in on that one card.
It may sound like I'm shitting on PC gaming, but I'm not. PCs will always produce the best looking games because of their ability to scale upwards. Even though your graphics card won't have every trick inside it exploited the way a console's will, it will still produce incredible graphics while it's relevant. By the time the PS4 launches my current PC will be two years old, and it will still run multiplats like BF4 better than the PS4 will.
This is why Crysis on the PC was such an amazing looking game, though: it actually targeted high-end systems at the time. For years afterwards, when you got a new PC the question was "OK, but how well does it run Crysis?" They programmed to the strengths of high-end PCs of the day, and it stayed in the category of amazing looking games for years because of it.
But back to your question... this isn't anything you can brute-force. It doesn't make the CPU or GPU more powerful, and it will take time and expertise to exploit. Most people, including Sony PR/Cerny, think it will be a couple of years at least. But once these tools get worked into the SDK, I think you will see them used to some extent in most games. We are years away from that, though.
I see... so we have to wait a while.
Nice try, but none of the consoles you mentioned were similar in architecture; therefore taking their GFLOPS numbers and drawing a conclusion from them is wrong.
The PS4 and Xbox One, on the other hand, have the same architecture, implemented in different ways.
This is even cuter but has nothing to do with the point.
Did the devs take advantage of the substantial extra power of the Xbox? Yes. That's it.
Have you played Splinter Cell: Chaos Theory on Xbox, for instance?
No, I don't have an Xbox and I don't know about the game.
But even if it's a crappy port it doesn't mean that they didn't take advantage of the better specs in several other multiplats.
I can "see" that you don't have an Xbox. No offence.
Check this out: http://www.youtube.com/watch?v=HkHoDEh0FWc
And I don't think the PS2 version was a crappy port - I have both versions. It was a solid looking PS2 game. The Xbox was just clearly the more powerful machine. You can't deny that.
But I was talking about the Xbox version when I said "crappy port".
EDIT: LOL, I just noticed that I misread/messed up the quotes: "Did the devs take advantage of the substantial extra power of the Xbox? Yes." Yeah, exactly... Anyway, I have to go to sleep and probably take some more English classes. You quoted a post that you pretty much agreed with, so I thought you disagreed that the Xbox was more powerful. Apologies.
lol, don't worry.
You got it backwards, buddy. The GameCube architecture was older and more archaic - it had no programmable shaders, which would make the difference in performance even LARGER than the FLOPS numbers would have you believe, since it's missing functionality at the architecture level. Even worse for the PS2, which was missing even more key architectural features, like hardware bump mapping. In contrast, the PS4 and the One are both DX11-class hardware.
Let's try not to get this thread locked...
3rd party publishers/developers should have absolutely no excuse as to why they can't get the most out of the PS4's hardware in terms of graphics, resolution, and framerate.
It's not like the PS3, where it's hard to get the most out of the console due to the Cell architecture.
I agree completely. I think people are going to see a much wider divide in third-party games this generation. People are used to only first-party developers getting the most out of the hardware, but these boxes being far less specialized should make it significantly easier for developers to get the most out of each platform. Not to mention development tools and engines have come a very long way, and things scale much better now than they ever did. People are kidding themselves if they think developers are just going to ignore 40% more GPU power and higher memory bandwidth.
The comparison is not valid for one reason - different architectures yield different FLOPS numbers that aren't completely indicative of performance. You're comparing apples to oranges.
The PS4 and Xbox One have the same architecture. They are both built around AMD GPUs, so the comparison is much more direct.
Imagine you have two PCs: same CPU, same RAM, but one has a stronger GPU and one has a weaker GPU. That's the PS4 and Xbox One in a nutshell. A highly simplistic comparison, but there you go.
Except they don't have the same RAM.
Yes, as I said, a highly simplistic comparison.
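For concreteness, here's a rough sketch of that nutshell with numbers filled in. These figures are my addition from the commonly cited launch specs, not from the posts above, so treat them as approximate:

```cpp
#include <cstdio>

struct Console {
    const char* name;
    int ramGB;
    const char* ramType;
    int mainBandwidthGBs;  // main memory pool only
    int gpuCUs;            // GCN compute units
};

int main() {
    Console specs[] = {
        {"PS4",      8, "GDDR5",              176, 18},
        {"Xbox One", 8, "DDR3 (+32MB eSRAM)",  68, 12},
    };
    for (const Console& c : specs)
        std::printf("%-8s | %dGB %-18s | ~%3d GB/s | %2d CUs\n",
                    c.name, c.ramGB, c.ramType, c.mainBandwidthGBs, c.gpuCUs);
    return 0;
}
```

Same amount of RAM, but very different pools and GPU widths - which is the point being made above.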
So the PS4 does use DirectX? A custom version, but it's still DirectX, right?
Most games will be built on PC and then scaled down to the PS4 and Xbone. They will just have to be scaled down further on the Xbone because of the weaker hardware. How much or how little difference that will make, I'm not qualified to say. But the difference will be there, and it won't require any extra effort from devs.
For people who keep bringing up PS3/360 multiplats as a counter, saying "the PS3 had more power and it didn't matter!" - it actually works as support if anything. The 360 had the superior GPU, more usable RAM to play with (one unified pool instead of a split one), and in some ways the more appropriate CPU. And almost all multiplatform games reflected those advantages.
I thought the PS3 and 360 were roughly evenly matched?
Yep. The PS3 was more powerful but much harder to program for than the 360. The PS4 is more powerful AND easier to program for than the Xbox One. That's a pretty nice combo for Sony this time.
bunch of crap
Where's Jeff when you need him?
I thought the PS3 and 360 were roughly evenly matched?
PS3: Better CPU, hard to program for. Weaker GPU. Split memory.
360: Better GPU. Easier to program for. Unified memory.
This reminds me of the time Microsoft fans accused Sony fans of using Nvidia's numbers against the 360, and how those were bullshit. See what I'm trying to say?
You have no clue what you're talking about - you're literally comparing apples to oranges. The hardware inside both consoles is pretty much the same; the PS4 just has more power and more "features". It's just reality. Stop making bullshit arguments.
The GPU was tweaked quite a bit, so it wasn't all that much weaker (but yeah, the 360's was better, with fewer tweaks). The problem was that coding for the PS3 was all custom stuff that was a waste of time. This is why Gaben bitched about it back in the day... you had to learn specifically how to make games take advantage of the PS3 hardware, and none of that work translated over to the 360 or PC. The Cell was a brilliant chip that was a complete waste of time to learn... unless you were first party.
I hear what you're saying, but all I see is a Sackboy without a hat.
Every PS4 has a Cerny inside.