Xbox One SDK & Hardware Leak Analysis CPU, GPU, RAM & More - [Part One]

Read your post and my answer...
You said it's the biggest difference this gen, by which you meant the percentage drop in resolution, and I countered with 'not visible'. The visible difference is not as big as the percentage resolution difference.
And I even explained it later in the post...

Play any game where you can enable decent AA, on your TV, at 1080p, 900p, 720p and 640p, and tell me that the difference between 720p and 640p is bigger than the difference between 1080p and 900p.

For starters, I wrote 720p vs 1080p. And I've played Halo 3 with no AA (640p) and the jaggies were bad, but going to Halo 4 (720p) with FXAA still isn't as big of a jump as seeing COD Ghosts at 720p vs 1080p on my TV. It's a larger increase in pixels displayed and it is very noticeable. If you have a PS4 and XB1, go download the PES demos and switch back and forth while playing. You'll notice the XB1 version looks a lot worse. It's because it's 1080p vs 720p.
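
For scale (my own numbers, assuming Halo 3's 1152×640 framebuffer and the standard 1280×720 and 1920×1080 outputs):

1152 × 640 = 737,280 pixels
1280 × 720 = 921,600 pixels (about 1.25× the 640p count)
1920 × 1080 = 2,073,600 pixels (2.25× the 720p count)

So the 720p-to-1080p step is a far bigger jump in rendered pixels than the 640p-to-720p step.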
 
What benefits? There were no such benefits last gen, and there are none in this gen's games either.
DX12 will further reduce CPU bottlenecks, making it even harder for consoles to catch up in terms of CPU performance.

-----


But developers' priority will be pushing tech forward, not limiting themselves just to hit 1080p; that's a given. It has always been like that.

-----


I feel you totally :(
Hell, several months ago I read posts from people saying they didn't even want the current gen to launch yet, like wtf?! :(
lol, smh... Not trying to turn this into a PC vs. console debate. My main point is that your assertion that sub-1080p games will be the norm on the PS4 later in the generation is a bit premature. Yes, PC will get stronger every year, and you will see 4K gaming @ 60fps, but you don't need 64 ROPs for 1080p, and you don't need a 5+ TFLOP GPU for 1080p@30fps. I grant that some multiplatform games will see sub-1080p resolutions, but I highly doubt PS4 exclusives will for the duration of the generation; just look at Driveclub and The Order. This generation is just getting started.

Edit: Also, the narrative that console games are holding back PC will continue (partially true).
 
Then the biggest benefit of a high-end PC is lost, aside from framerate!

What? Where was I talking about PC here? It was a discussion about the power difference between PS2/Xbox, PS3/Xbox 360, and PS4/Xbone, which in this generation mostly means resolution.

----
you don't need a 5+ TFLOP GPU for 1080p@30fps.
Edit: Also, the narrative that console games are holding back PC will continue (partially true).
Right now you don't. In 2+ years, 4.5-5 TFLOPS for fully stable 1080p30 at high settings is not unreasonable.

---
For starters, I wrote 720p vs 1080p. And I've played Halo 3 with no AA (640p) and the jaggies were bad, but going to Halo 4 (720p) with FXAA still isn't as big of a jump as seeing COD Ghosts at 720p vs 1080p on my TV. It's a larger increase in pixels displayed and it is very noticeable. If you have a PS4 and XB1, go download the PES demos and switch back and forth while playing. You'll notice the XB1 version looks a lot worse. It's because it's 1080p vs 720p.
1080p vs. 720p is more the anomaly than the standard for this gen. It will be either 720p vs. 900p or 900p vs. 1080p.
 
Ray 'Charles' Maker.

As we've already seen this generation, some multiplatform games will be close on the two (main) systems, and some will have massive differences, and that's also how it's been for the past 20 years. You can't (well, apparently you can) make a blanket statement like that and not expect to be disagreed with, so you got that right.

Massive difference, lol. There have only been 3 games that are 720p on the X1, and Tomb Raider was locked at 30.

Ray Charles? What the hell?
 
The fact remains that after only 1 year, MS do appear to be scraping the bottom of the barrel for any crumb of power available, and it already has an impact on games' stability (cf. the inability of devs to know whether they have 50% or 80% of the 7th core available at any given time).

As the other points have already been corrected, I will reply to this:
The wording is interesting - when would you expect MS to do this, if not ASAP? And what is this about "stability"?! You mean that achievement notifications or voice commands could steal a few frames from you? Devs always have the 50%, and it's up to them whether they take even more, at a (small) risk. At least MS has the ability to free up more power, which for example didn't happen last gen. The three-OS design enables them to do so. To me it's an elegant way to circumvent any potential problems with a "default" scheduler.
I don't know why you are trying to tell such a negative story.
 
I feel you totally :(
Hell, several months ago I read posts from people saying they didn't even want the current gen to launch yet, like wtf?! :(

Tbh, I would have liked the current gen to start in 2015 if it meant a device that could make use of GCN 2.0, 2.5D/3D stacked memory, and more TFLOPS, so the current gen would be more future-proof.
 
What happened with the PS4's secondary ARM processor?

Also, do these consoles have anything similar to AMD Turbo Core?

The ARM processor is only used for connected standby, as far as I remember. So it handles background downloads and so on. I have not yet come across any information suggesting it is used at all while the main OS is running (it would likely interfere with hard disk accesses).

Dynamic overclocking does not seem to be present. The systems will likely throttle the CPU cores when they are not utilized (e.g. no game is running), but they don't exceed the maximum frequency. Given that such boosts mainly make sense for badly parallelized software, since the performance of a few cores is increased while others sit idle (spreading the thermal budget differently), and that console software is tailored to the hardware, it would likely cause more issues (predictability) than benefits.
 
The ARM processor is only used for connected standby, as far as I remember. So it handles background downloads and so on. I have not yet come across any information suggesting it is used at all while the main OS is running (it would likely interfere with hard disk accesses).

This is what Sony was also planning for the ARM processor, but so far it doesn't work that way. If the system is in standby and you issue a download to the PS4, the whole system wakes up to download.
But the ARM chip handles the video capturing, afaik.
 
Why could MS and Sony not just basically launch gaming laptops with a 780M GPU and i5/i7 CPUs? A laptop is a small form factor, and the best ones around are more powerful than the Xbox One and PS4. Why did Sony and MS go with such weak architectures? Would a 780M not have been a much better GPU choice than a custom AMD mid-range GPU chip?
 
Why could MS and Sony not just basically launch gaming laptops with a 780M GPU and i5/i7 CPUs? A laptop is a small form factor, and the best ones around are more powerful than the Xbox One and PS4. Why did Sony and MS go with such weak architectures? Would a 780M not have been a much better GPU choice than a custom AMD mid-range GPU chip?

The main reason is cost; a 780M and an i5/i7 would not come cheap. Sony is selling every PS4 it can produce; they made the right call.
 
Has anyone posted this article yet? Someone on another forum said Phil just retweeted it.

But take it as you will:
http://www.littletinyfrogs.com/article/460524/DirectX_11_vs_DirectX_12_oversimplified

DirectX 11 vs. DirectX 12 oversimplified

This article is an extreme oversimplification


Your CPU and your GPU

Since the start of the PC, we have had the CPU and the GPU (or at least, the “video card”).

Up until DirectX 9, the CPU, being 1 core in those days, would talk to the GPU through the “main” thread.

DirectX 10 improved things a bit by allowing multiple cores to send jobs to the GPU. This was nice, but the pipeline to the GPU was still serialized. Thus, you still ended up with 1 CPU core talking to 1 GPU core.

It’s not about getting close to the hardware

Every time I hear someone say “but X allows you to get close to the hardware” I want to shake them. None of this has to do with getting close to the hardware. It’s all about the cores. Getting “closer” to the hardware is relatively meaningless at this point. It’s almost as bad as those people who think we should be injecting assembly language into our source code. We’re way beyond that.




It’s all about the cores

Last Fall, Nvidia released the Geforce GTX 970. It has 5.2 BILLION transistors on it. It already supports DirectX 12. Right now. It has thousands of cores in it. And with DirectX 11, I can talk to exactly 1 of them at a time.

Meanwhile, your PC might have 4, 8 or more CPU cores on it. And exactly 1 of them at a time can talk to the GPU.

Let’s take a pause here. I want you to think about that for a moment. Think about how limiting that is. Think about how limiting that has been for game developers. How long has your computer been multi-core?

But DirectX 12? In theory, all your cores can talk to the GPU simultaneously. Mantle already does this and the results are spectacular. In fact, most benchmarks that have been talked about have been understated because they seem unbelievable. I've been part of (non-NDA) meetings where we've discussed having to low-ball performance gains to being “only” 40%. The reality is, as in, the real-world, non-benchmark results I've seen from Mantle (and presumably DirectX 12 when it's ready) are far beyond this. The reasons are obvious.

To summarize:

DirectX 11: Your CPU communicates to the GPU 1 core to 1 core at a time. It is still a big boost over DirectX 9 where only 1 dedicated thread was allowed to talk to the GPU but it’s still only scratching the surface.

DirectX 12: Every core can talk to the GPU at the same time and, depending on the driver, I could theoretically start taking control and talking to all those cores.

That’s basically the difference. Oversimplified to be sure but it’s why everyone is so excited about this.

The GPU wars will really take off as each vendor will now be able to come up with some amazing tools to offload work onto GPUs.


Not just about games

Cloud computing is, ironically, going to be the biggest beneficiary of DirectX 12. That sounds unintuitive but the fact is, there’s nothing stopping a DirectX 12 enabled machine from fully running VMs on these video cards. Ask your IT manager which they’d rather do? Pop in a new video card or replace the whole box. Right now, this isn’t doable because cloud services don’t even have video cards in them typically (I’m looking at you Azure. I can’t use you for offloading Metamaps!)

It’s not magic

DirectX 12 won’t make your PC or XBox One magically faster.

First off, the developer has to write their game so that they’re interacting with the GPU through multiple cores simultaneously. Most games, even today, are still written so that only 1 core is dedicated to interacting with the GPU.

Second, this only benefits you if your game is CPU bound. Most games are. In fact, I’m not sure I’ve ever seen a modern Nvidia card get GPU bound (if anyone can think of an example, please leave it in the comments).

Third, if you’re a XBox One fan, don’t assume this will give the XBO superiority. By the time games come out that use this, you can be assured that Sony will have an answer.

Rapid adoption

There is no doubt in my mind that support for Mantle/DirectX12/xxxx will be rapid because the benefits are both obvious and easy to explain, even to non-technical people. Giving a presentation on the power of Oxide’s new Nitrous 3D engine is easy thanks to the demos but it’s even easier because it’s obvious why it’s so much more capable than anything out there.

If I am making a game that needs thousands of movie-level CGI elements on today’s hardware, I need to be able to walk a non-technical person through what Nitrous is doing differently. The first game to use it should be announced before GDC and in theory, will be the very first native DirectX 12 and Mantle and xxxx game (i.e. written from scratch for those platforms).


read the article for the rest





added this

GDC

Pay very very close attention to GDC this year. Even if you’re an OpenGL fan. NVidia, AMD, Microsoft, Intel and Sony have a unified goal. Something is about to happen. Something wonderful.
 
Why could MS and Sony not just basically launch gaming laptops with a 780M GPU and i5/i7 CPUs?

Cost, as has been stated already. But it's not only because less powerful components cost less. The fact that AMD can produce a single chip that contains everything from the memory controller to the CPU and GPU is quite a big cost saver as well. You don't need to assemble different chips, and you don't have to implement the buses to connect them.

AMD is in a rather unique position there. Nvidia doesn't yet have a CPU that is fast enough, even though they are getting closer. Intel is lacking on the graphics side (and Intel chips are quite expensive in the end). Only AMD offered a CPU/GPU solution on a single chip at the desired performance.
 
Think about this for a moment: in every single spec, the PS3 was superior to the 360. Even on the GPU side it had more flops. And in terms of flops (at least in theory), Cell was in the same ballpark as RSX and Xenos, while the 360's processor was way behind. Half of its memory was the same as the 360's and the other half was faster.

74.8 billion shader operations per second (24 Pixel Shader Pipelines*5 ALUs*550 MHz) + (8 Vertex Shader Pipelines*2 ALUs*550 MHz)
192 GFLOPS for RSX, and only then with a perfect combination of pixel and vertex work

96 billion shader operations per second (3 shader pipelines × 16 processors × 4 ALUs × 500 MHz)
240 GFLOPS for Xenos that could be applied to either pixel or vertex in any ratio.


Sony's FLOP ratings were smoking the wacky tobaccy.
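
As a quick sanity check (my own arithmetic, using only figures from the spec lists below): the 240 GFLOPS for Xenos follows directly from 48 vector processors × 10 FP ops per cycle × 500 MHz, and the shader-op figures above likewise match their bracketed formulas (3 × 16 × 4 × 500 MHz = 96 billion; (24 × 5 + 8 × 2) × 550 MHz = 74.8 billion). So the headline numbers are internally consistent, and the FLOPS advantage sits with Xenos despite RSX's higher clock.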

Full specs
500 MHz parent GPU on 90 nm, 65 nm (since 2008) TSMC process or 45nm GlobalFoundries process (since 2010, with CPU on same die) of total 232 million transistors
48 floating-point vector processors for shader execution, divided into three dynamically scheduled SIMD groups of 16 processors each.[2]
Unified shading architecture (each pipeline is capable of running either pixel or vertex shaders)
10 FP ops per vector processor per cycle (5 fused multiply-add)
Maximum vertex count: 6 billion vertices per second ( (48 shader vector processors × 2 ops per cycle × 500 MHz) / 8 vector ops per vertex) for simple transformed and lit polygons
Maximum polygon count: 500 million triangles per second[2]
Maximum shader operations: 96 billion shader operations per second (3 shader pipelines × 16 processors × 4 ALUs × 500 MHz)
240 GFLOPS
MEMEXPORT shader function
16 texture filtering units (TF) and 16 texture addressing units (TA)
16 filtered samples per clock
Maximum texel fillrate: 8 gigatexels per second (16 textures × 500 MHz)
16 unfiltered texture samples per clock
Maximum dot product operations: 24 billion per second
Support for a superset of the DirectX 9.0c API, DirectX Xbox 360, and Shader Model 3.0+
500 MHz, 10 MB daughter embedded DRAM (at 256 GB/s) framebuffer on 90 nm, 80 nm (since 2008 [3]) or 65 nm (since 2010 [4]).
NEC designed eDRAM die includes additional logic (192 parallel pixel processors) for color, alpha compositing, Z/stencil buffering, and anti-aliasing called “Intelligent Memory”, giving developers 4-sample anti-aliasing at very little performance cost.
105 million transistors [5]
8 render output units
Maximum pixel fillrate: 16 gigasamples per second using 4X multisample anti-aliasing (MSAA), or 32 gigasamples using Z-only operation; 4 gigapixels per second without MSAA (8 ROPs × 500 MHz)
Maximum Z sample rate: 8 gigasamples per second (2 Z samples × 8 ROPs × 500 MHz), 32 gigasamples per second using 4X anti aliasing (2 Z samples × 8 ROPs × 4X AA × 500 MHz)[1]
Maximum anti-aliasing sample rate: 16 gigasamples per second (4 AA samples × 8 ROPs × 500 MHz)[1]

Based on the G70 chip (NV47, GeForce 7800 GTX), but with only 8 ROPs activated and a 128-bit memory interface
500 MHz on 90 nm process (shrunk to 65 nm in 2008[4] and to 40 nm in 2010[5])
300+ million transistors
Multi-way programmable parallel floating-point shader pipelines
Independent pixel/vertex shader architecture
24 parallel pixel-shader ALU pipes clocked @ 550 MHz
5 ALU operations per pipeline, per cycle (2 vector4, 2 scalar/dual/co-issue and fog ALU, 1 Texture ALU)[citation needed]
10 floating-point operations per pipeline, per cycle[6]
8 parallel vertex-shader pipelines @550 MHz
2 ALU operations per pipeline, per cycle (1 vector4 and 1 scalar, dual issue)[citation needed]
10 floating-point operations per pipeline, per cycle[citation needed]
Floating Point Operations: 192 GFLOPS [7]
74.8 billion shader operations per second (24 Pixel Shader Pipelines*5 ALUs*550 MHz) + (8 Vertex Shader Pipelines*2 ALUs*550 MHz)
24 texture filtering units (TF) and 8 vertex texture addressing units (TA)
24 filtered samples per clock
Maximum texel fillrate: 13.2 GigaTexels per second (24 textures * 550 MHz)
32 unfiltered texture samples per clock, ( 8 TA x 4 texture samples )
8 render output units / pixel rendering pipelines
Peak pixel fillrate (theoretical): 4.4 Gigapixel per second
Maximum Z sample rate: 8.8 GigaSamples per second (2 Z-samples * 8 ROPs * 550 MHz)
Maximum Dot product operations: 56 billion per second (combined with Cell CPU)
128-bit pixel precision offers rendering of scenes with high dynamic range (HDR)
256 MB GDDR3 RAM at 700 MHz
128-bit memory bus width
22.4 GB/s read and write bandwidth
Cell FlexIO bus interface
20 GB/s read to the Cell and XDR memory
15 GB/s write to the Cell and XDR memory
Support for PSGL (OpenGL ES 1.1 + Nvidia Cg)
Support for S3TC texture compression [8]

http://en.wikipedia.org/wiki/RSX_'Reality_Synthesizer'

http://en.wikipedia.org/wiki/Xenos_(graphics_chip)
 