Yes, confirmed by Iwata.
IBM reps keep saying Power 7 based. "Leaks" point towards an upgraded Broadway. It's not clear what "upgraded" means in this context.
...why does it need 1gig for system?

https://mobile.twitter.com/Emi1yRogers/status/247097601151864832?p=v
This concerns me a bit, not going to lie.
https://mobile.twitter.com/Emi1yRogers/status/247097601151864832?p=v
This concerns me a bit, not going to lie.
It concerns me that Emily Rogers shit is still being posted on gaf.
That number is a complete hypothetical based on the GFLOP/TDP performance of the 700 line, easily shown here (I know... Wikipedia, bla bla bla, but I lost my original source from Ars Technica). From the chart, the best performer is the 4770 (RV740 core), at 12 GFLOPs/W, with a TDP of 80W for a GFLOP performance of 960.

Fair enough. Both the R700 and the 40nm numbers are disputable (and those wiki ratios are not really about shader FLOPs per se, despite being labeled as such), but for now we can take the '14 GFLOPs/W' as a better-than-nothing estimate.
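As a quick sanity check on that arithmetic, here's a back-of-the-envelope sketch. The wattage targets in the loop are hypothetical budgets I picked for illustration, not leaked figures:

```python
# Back-of-the-envelope perf-per-watt check, using the chart numbers quoted above.
# TDP is whole-board power, not just shader power, so treat this as an upper bound.
def gflops_per_watt(gflops: float, tdp_watts: float) -> float:
    return gflops / tdp_watts

# Radeon HD 4770 (RV740): 960 GFLOPs at an 80 W TDP.
best_ratio = gflops_per_watt(960, 80)  # 12.0

# Hypothetical console GPU power budgets (my assumption, not leaked figures):
for watts in (15, 25, 35):
    print(f"{watts} W -> ~{best_ratio * watts:.0f} GFLOPs")
```

Scale the ratio by whatever power budget you believe in and you get the corresponding ceiling, which is why it's only a better-than-nothing estimate.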
https://mobile.twitter.com/Emi1yRogers/status/247097601151864832?p=v
This concerns me a bit, not going to lie.
Wasn't she the one who wrote Ten years decline of Sony?
IBM reps keep saying Power 7 based.

No, they don't. The only thing on record in that direction is "based on some of the same technologies". All it takes for this to be true is something like this, maybe:
Yes she is. A year or two ago she was given bad "insider information" and blogged about it; now she checks her sources and links to them directly, so she only posts stuff on the site when we can source it. She also wrote Rise of Costs, Fall of Gaming, which was our debut article for the site (I only mention it because of how well sourced it is).
But now that I have mentioned it, I don't know if devs are going to find the budgets to bring people the next gen some of us are expecting. I mean, Red Dead cost over $100m to make. Could you expect that to move into next gen, with the fidelity everyone is expecting, for the same price? A lot of games already have to sell 3-5 million copies just to break even at $60. What if PS4 and XB3 games cost $80? Or even $100?

So I'm not convinced these specs will really look far behind at all, especially with trailers like MGS:GZ coming out for the PS3. Wii U has a lot of tricks up its sleeves that will push graphics further than we have seen with these 6+ year old consoles, the main ones being more RAM and all of the modern effects you find on PC hardware.
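For what it's worth, the break-even math above roughly checks out. The 45% net share is my assumption (retail, platform, and distribution cuts vary widely), so treat this as a sketch:

```python
# Rough break-even sketch for the budget figures above. The 45% net share is
# an assumption (retail, platform, and distribution cuts vary widely).
def break_even_units(dev_cost: float, price: float, net_share: float = 0.45) -> float:
    return dev_cost / (price * net_share)

# A $100m game at $60 lands in the quoted 3-5 million copy range:
for price in (60, 80, 100):
    units = break_even_units(100_000_000, price)
    print(f"${price}: ~{units / 1e6:.1f}M units to break even")
```

Under those assumptions a $100m title needs roughly 3.7M copies at $60, which is exactly the 3-5 million ballpark being quoted.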
It's an upper bound. A useful tool to make reasonable predictions.
Gemüsepizza;42186538 said: You are very wrong if you think next-gen games are necessarily much more expensive than current games. Do you know what costs a lot of money? Optimization on old hardware. Faking visuals to achieve a certain look. That costs money. Assets are already produced in very high quality. The workflow will not change drastically. You have fewer limitations with new hardware, and if the new consoles from Sony and MS really are as similar as some rumors suggest, multiplatform games will be even cheaper to develop. And that's the problem with the Wii U: it may be simple to port current-gen games to the Wii U, but it will be difficult and expensive to port next-gen games to it. And budgets won't make drastic jumps - they are getting bigger, but only because the gaming market grows.
This is not true. For instance, character models are only 10-30k polygons, but the next-gen jump would easily allow 10x that number, so you could end up with character models being 300k or maybe even half a million polygons (if you want cutscene fidelity). Environments would see a similar "improvement": buildings, cars, level design in general would all see a polygon increase. That takes time and allows for more detail, which requires more artists.
Content creation will also take more time, and games might again get shorter... Development costs go up from generation to generation, and Microsoft has already mentioned "AAAA" titles. I don't know how much clearer it has to become: longer development cycles with bigger budgets and more people working on them. Not every game can be AC3 (with ~600 developers working on some aspect of the game).
Also, your last point is backwards. Wii U, XB3 and PS4 are all much more similar in architecture than previous gens. In fact, XB3 is closer to Wii U's architecture than PS4 is, considering PS4 is going x86 while XB3 and Wii U will be sticking with PowerPC chips. All three are using AMD GPUs, all custom, and all with their own APIs, but they will still be built on the same technology, and a lot of the GPU hardware will be designed around patents from AMD.
A whole gigabyte for the OS is surprising. I can only imagine two reasons for that:
1) true multitasking for non-gaming applications, with 3rd parties being allowed to develop such applications (unlike the 3DS where only one MT application can run at once and they're all Nintendo developed.) Maybe media apps like TVii (which included Netflix and Hulu) can be run while the game is suspended.
No, he is right: hi-res models are in the millions of polygons. That's what they use for the normal maps on the lower-polygon models - for example, the characters in Gears of War. Same for the buildings, etc.
If the consoles could push millions of polygons per character, this would be great, but it's not possible to jump from ~30k polygons to the full hi-res models' "millions of polygons" without a similar increase in graphics processing, and all rumors point under 2TFLOPs, which isn't even the 10x increase I graciously gave in my example.
This means new character models will be made and the hi-res models will be used just as they are now (as normal maps for the lower polygon models that sit at a fraction of those polygon numbers)
Nope, characters are already made with millions of polygons. Most of those polygons are baked into normal maps, however. And they will continue this way on the PS4 and Xbox8, because the only viable way to drastically increase polycounts over current-gen consoles is by using tessellation.

With tessellation, models will still be exported with 10-30k polygons (or even less), but they'll use displacement maps instead of normal maps, and shaders will reconstruct the polygons from the original million-polygon model. This saves massive amounts of VRAM and bandwidth (vertices are not weightless).
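To put rough numbers on those VRAM savings, here's an illustrative comparison. The byte counts and map resolution are assumed layouts I picked for the sketch, not figures from any actual engine:

```python
# Illustrative VRAM comparison: a dense 1M-vertex mesh vs. a 30k mesh plus an
# 8-bit displacement map. All sizes are assumed layouts, not real engine figures.
BYTES_PER_VERTEX = 32  # e.g. packed position + normal + UV

dense_mesh = 1_000_000 * BYTES_PER_VERTEX   # full-detail model as raw vertices
coarse_mesh = 30_000 * BYTES_PER_VERTEX     # the exported low-poly cage
disp_map = 1024 * 1024 * 1                  # 1024x1024, one byte per texel

print(f"dense mesh:        {dense_mesh / 2**20:5.1f} MiB")
print(f"coarse + disp map: {(coarse_mesh + disp_map) / 2**20:5.1f} MiB")
```

Even with these crude assumptions the coarse-mesh-plus-displacement route comes out more than an order of magnitude smaller, which is the "vertices are not weightless" point.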
Wtf? It means the million-poly models artists already create as part of their current workflow can be respun to new runtime target polycounts with a few different parameters given to already existing tools. Amazing how completely opposite you go with your conclusions.
That is pretty much the conclusion I came up with as well, though I didn't know that the low-polygon models were "respun" (do you mean they are produced automatically? That would in fact simplify it, and it also would mean that Wii U could get a similar treatment and have "auto-scaled" models - unless you were just being cute with words and devs actually create those models manually).
Tools that take arbitrarily detailed models and spit out not-so-detailed models plus normal maps to visually reconstruct the difference are a dime a dozen these days. This is HL2/Doom 3 era tech, and any modern modeling package supports it natively now.
http://amber.rc.arizona.edu/lw/qemloss.html
http://amber.rc.arizona.edu/lw/normalmaps.html
Now, if you want to target a machine that can do 200k poly models comfortably, you just start again with the same super-detailed source model, and tell your tool of choice to bake you a 200k version (instead of the 30k versions you had it build for you before). You don't need to learn anything new at all.
The picture with tessellation is not as simple (artists do have to learn new skills), but we're steadily getting to the stage where tessellation-based asset pipelines approach the complexity of the old normal-map ones. Furthermore, normal maps are not leaving the scene this gen either - displacement + normal maps will likely be the norm.
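As a toy illustration of that "same source model, different target polycount" idea, here's a minimal vertex-clustering decimator. It's far cruder than the quadric-error tools linked above, but it's driven by a single quality parameter in the same spirit:

```python
# Toy mesh decimator via vertex clustering: snap vertices to a grid, merge
# duplicates, drop collapsed triangles. A coarser grid (bigger cell_size)
# yields fewer triangles - one source model, many target polycounts.
def decimate(vertices, triangles, cell_size):
    cluster_index = {}   # grid cell -> index into new_vertices
    remap = {}           # old vertex index -> new vertex index
    new_vertices = []
    for i, (x, y, z) in enumerate(vertices):
        cell = (round(x / cell_size), round(y / cell_size), round(z / cell_size))
        if cell not in cluster_index:
            cluster_index[cell] = len(new_vertices)
            new_vertices.append(tuple(c * cell_size for c in cell))
        remap[i] = cluster_index[cell]
    new_triangles = []
    for a, b, c in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:  # keep only non-degenerate faces
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles
```

Baking normal or displacement maps from the removed detail is the part real tools add on top; the point here is just that the output polycount is one knob you re-run the pipeline with, not a hand-modeling job.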
Anyone know if the WiiU support SDXC? Or is it just SDHC?
This is good to know. This tells me that even if the PS4 and Nextbox do outclass the WiiU, it shouldn't be a problem at all when it comes to ports. The WiiU isn't in the same situation as the Wii was, where it just flat out couldn't do what the other systems were doing.
https://mobile.twitter.com/Emi1yRogers/status/247097601151864832?p=v
This concerns me a bit, not going to lie.
Especially since they were talking about how the Wii U can run their game at 1080p without breaking a sweat about a month ago.
And even crazier since their games are pretty casual.
https://mobile.twitter.com/Emi1yRogers/status/247097601151864832?p=v
This concerns me a bit, not going to lie.
Yeah it's only a co-founder's view after all (lol).
But I think he basically feels it will be another Wii like situation in a few years. Which is bad for some people, good for others.
It was about PS4/720 ports.
So they know their market? OK... Casual doesn't have to be a curse word.
The excuses won't be there (at least any valid ones).
The only way WiiU will become like Wii is if publishers want it to.
I know it was about the PS4/720 ports, and I didn't say casual was a curse word. I just found it crazy that someone who makes the type of games they make is saying that.
Most of the Wii excuses were pushing the boundaries of validity already.
Yes it is up to publishers but some publishers will never change, and already don't seem to put Wii U in the same class as unannounced consoles.
WiiU is in a much better position architecturally to be compatible with MS's and Sony's next-gen systems, and it is coming out first. The excuses won't be there (at least any valid ones).
How can you say that when there are zero official reports on Sony's and Microsoft's next systems? If their systems are much more powerful, the devs could just say that WiiU isn't powerful enough for their games and they would have a valid excuse. :/
Personally, I definitely think the Wii U could end up getting no PS4/720 ports, or crappy/lazy ones, in a few years. But not every great developer can afford billion-dollar, AAA movie-style productions, so hopefully they'll use the Wii U instead.
It's also one thing that is abundantly clear without any source at all. Unless they had decided to go with an overclocked Wii GPU, literally every sane choice they could make is capable of general purpose computation.
The thing with Wii U compared to Wii is that developers won't have to make new versions of their games from the ground up for Wii U like they did for Wii. Take CoD for example. They had to have a separate team entirely dedicated to the Wii version of the games, because it had to be built for the Wii pretty much from the ground up to get it to work. Simply downscaling the resolution, textures, etc. wasn't enough.

CoD is the wrong example, as it's one of the few games that (past 3) got actual ports, and good ones at that, while those teams were clearly underfunded and understaffed going by reports. Games like Ghostbusters are better examples, although in some cases it was a blessing in disguise. But the guy is right in that we don't know if that's going to be the case with WiiU yet, since we don't know anything solid about the other next-gen systems.

It's natural to assume it's going to be much better than the Wii situation, since you basically have the difference between DX10- and DX11-level hardware (as there's nothing newer coming up) rather than the gigantic architectural gap between Wii and the PS360. But performance-wise, the difference could still be too great to make real multiplatform efforts any more possible. It's like how some beautiful-looking games run and look great on lower-end PCs because they were made with those in mind as well, while others don't scale down nearly as well despite the obviously similar architecture. It remains to be seen whether developers will need to downscale to the WiiU, and if so, whether they will care to make that effort.
The ability to scale down our assets has never been the barrier, other than the cost/benefit analysis of generating multiple detail levels of every single asset for the purpose of an additional port. But there will be games that, based on the hardware demands created by their design, will not be practical on the WiiU.
But there's absolutely no reason for Iwata to specifically mention that capability if it wasn't an important feature to Nintendo. Seriously, Iwata basically focused on only one aspect of the GPU; he mentioned nothing else about it. Yet people here are actually trying to argue that it was just some random thing he threw out there. It's quite bizarre.

Exactly.