Next Xbox Reveal Set For May 21, 10:00 A.M. PST, Livestreamed

Status
Not open for further replies.
They allowed the original developers to keep the IPs. I wish they would make a deal for an exclusive MechAssault game on console, considering the IP isn't being used now.

They could use the current MechWarrior Online engine; it would be awesome.

If anyone here hasn't tried out MechWarrior Online I highly recommend it.

Well, at least that's something. Still, incredibly short-sighted and pretty stupid of MS not to lock down a MechAssault game.

I doubt we'll be seeing another Chromehounds next gen. Titan might scratch that itch, but it's got a lot to live up to.
 
You know what would be a fucking megaton?

Solid state drives. Honestly, I wouldn't even care if they drove the price up. Long term, it would be fucking awesome.

A combination of extra RAM and an SSD would all but eradicate loading times. Ooooooooohhhhh yeah.

I've long advocated an "ultra elite" SKU or something for consoles, with an SSD...

I wouldn't want to burden the normal mainstream SKUs with such a high cost... hell, you might be better off chucking in more and more DDR3 RAM instead.
 
You know what would be a fucking megaton?

Solid state drives. Honestly, I wouldn't even care if they drove the price up. Long term, it would be fucking awesome.

A combination of extra RAM and an SSD would all but eradicate loading times. Ooooooooohhhhh yeah.

I could see a small amount for parts of the OS and so on, but I wouldn't expect any for usable storage.
 
Let's just say: the meltdowns will write history.

 
No Crackdown 3 makes me really sad. Crackdown 1 is one of my favorite new IPs of this generation, and I would buy the new Xbox for this one game alone (OK, maybe not really, but add at least one more interesting title and here I am, day one).
I really hope we will see some titles that were great but somehow got lost in the middle of this generation.
 
I've long advocated an "ultra elite" SKU or something for consoles, with an SSD...

I wouldn't want to burden the normal mainstream SKUs with such a high cost... hell, you might be better off chucking in more and more DDR3 RAM instead.

Yeah I would definitely go for that. Who knows? :)
 
Let's just say: the meltdowns will write history.


The point I keep coming back to is that, unless you have a format bias, you can no longer lose:

1 - if Xbox is somehow 'better' than PS4... amazing... and amazing meltdowns on GAF.

2 - if it isn't, then... hell... who cares... PS4 looks like it will deliver the goods... and there will be a whole other set of hilarious meltdowns.


Either way, we get great new tech, and GAF history will be made.

As a Halo fan, though... I am buying Durango... so all I need to know is how good the damn thing is... i.e. which of the two will be my main home console.

PS4 does look DELICIOUS though... so if Durango is the same or better, then, damn, this is going to be a good time to be a gamer.


WHOOOOOP!


<I've had too much coffee, haven't I>
 
Far too much talk of 'crow', etc. I've never seen the point, personally: you're either right or you're wrong. Does it really matter in the long run? Those who argued it would suck won't be moved or have their opinion changed by whatever is revealed; they'll simply find something else to complain about.

Let's just enjoy what might be the last traditional console reveal we'll see. No need for this GFAQ level of nonsense with crow eating and the like.
 
Far too much talk of 'crow', etc. I've never seen the point, personally: you're either right or you're wrong. Does it really matter in the long run? Those who argued it would suck won't be moved or have their opinion changed by whatever is revealed; they'll simply find something else to complain about.

Let's just enjoy what might be the last traditional console reveal we'll see. No need for this GFAQ level of nonsense with crow eating and the like.

Nothing wrong with keeping expectations in check... but some have been lowering expectations a bit too far.

You can bank on moving goal posts after the reveal.

I'm going to feel so vindicated when they show off a 3.0TF GPU monster with a raytracing chip despite the fact I'm completely joking.

I think any hint that Durango and PS4 are in virtually the same ballpark of power will cause meltdowns. If and when that happens.
 
Far too much talk of 'crow', etc. I've never seen the point, personally: you're either right or you're wrong. Does it really matter in the long run? Those who argued it would suck won't be moved or have their opinion changed by whatever is revealed; they'll simply find something else to complain about.

Let's just enjoy what might be the last traditional console reveal we'll see. No need for this GFAQ level of nonsense with crow eating and the like.

Not at all, but it leads to some fucking funny forum posts.


I remember crying with laughter at the 'Letter to Square Enix' after FFXIII was announced for 360. That shit was priceless.

Has anyone got it?
 
I must admit I'm getting lost in this shader cores / number of threads... thing...

One compute unit in GCN has 4 vector units, and each has 16 processing elements.
Those processing elements are the shader cores.

Try the AMD OpenCL Programmer's Guide; it really cleared some things up for me during my internship. I may have swapped some of the terms; it's been 3-4 months since I last looked into it.
 
One compute unit in GCN has 4 vector units, and each has 16 processing elements.
Those processing elements are the shader cores.

Try the AMD OpenCL Programmer's Guide; it really cleared some things up for me during my internship. I may have swapped some of the terms; it's been 3-4 months since I last looked into it.

This is what vgleaks said (the thing that I don't understand):

"Shader cores: 12
Instruction issue rate: 12 SCs * 4 SIMDs * 16 threads/clock = 768 ops/clock
FLOPs: 768 ops/clock * (1 mul + 1 add) * 800 MHz = 1.2 TFLOPS"

L1 cache:
"64-way L1 cache of 16 KB, composed of 256 64-byte cache lines"
(so 256 lines long)

"A SIMD executes a vector instruction on 64 threads at once in lockstep."

"On each clock cycle, the scheduler considers one of the four SIMDs, iterating over them in a round-robin fashion. Most instructions have a four cycle throughput, so each SIMD only needs attention once every four clocks."


On vgleaks they use the formula 12 * 4 * 16 threads/clock:
12 stands for the shader cores, 4 for the SIMDs in every shader core, and 16 for the threads each should do per clock.
But why 16?!? when they say a SIMD executes 64 threads/clock?


And why on GCN isn't the L1 64-way and 256 lines long, but 64?

Please, if you have an answer, help me.


Is the vgleaks text wrong? Is their math formula wrong?

I know MisterX is talking about this too, and I'm NOT supporting his theories, but doesn't this vgleaks formula sound genuinely crazy?
 
This depends on your definition of a ballpark, I guess. Is a ballpark 20%? 30%? 50%?

Ballpark to me is the same feature set. For some reason people keep expecting an Xbox/PS2-sized difference because of the slight GPU advantage of the PS4. They're going to be severely disappointed, and I will enjoy making fun of them on a regular basis until I get banned for doing so. You know how cyberbullying is a no-no and all that jazz.
 
What the hell. Quantum was terrible. Blood Wake, however. Give it to me, NOW!

Quantum wasn't bad.
I would like a sequel, but more than that I would like some of the diversity the original Xbox exclusives had.

That, and futuristic racers need to make a comeback; the possibilities with next gen would be huge. Doubt we'll ever see full-price products again, though.
 
This is what vgleaks said (the thing that I don't understand):

"Shader cores: 12
Instruction issue rate: 12 SCs * 4 SIMDs * 16 threads/clock = 768 ops/clock
FLOPs: 768 ops/clock * (1 mul + 1 add) * 800 MHz = 1.2 TFLOPS"

L1 cache:
"64-way L1 cache of 16 KB, composed of 256 64-byte cache lines"
(so 256 lines long)

"A SIMD executes a vector instruction on 64 threads at once in lockstep."

"On each clock cycle, the scheduler considers one of the four SIMDs, iterating over them in a round-robin fashion. Most instructions have a four cycle throughput, so each SIMD only needs attention once every four clocks."


On vgleaks they use the formula 12 * 4 * 16 threads/clock:
12 stands for the shader cores, 4 for the SIMDs in every shader core, and 16 for the threads each should do per clock.
But why 16?!? when they say a SIMD executes 64 threads/clock?


And why on GCN isn't the L1 64-way and 256 lines long, but 64?

Please, if you have an answer, help me.


Is the vgleaks text wrong? Is their math formula wrong?

They mean a CU for the 64-threads bit; each CU has 64 threads because each has 4 SIMD units which are 16-way each.

As per the GCN white paper:

"In GCN, each CU includes 4 separate SIMD units for vector processing. Each of these SIMD units simultaneously executes a single operation across 16 work items, but each can be working on a separate wavefront. This places emphasis on finding many wavefronts to be processed in parallel, rather than relying on the compiler to find independent operations within a single wavefront."

http://www.amd.com/la/Documents/GCN_Architecture_whitepaper.pdf
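For what it's worth, the white paper's 16-work-items-per-SIMD figure is exactly what makes the leaked numbers add up. A quick sketch of the arithmetic in Python (the 12-CU / 800 MHz figures come from the leak, so treat them as assumptions, not confirmed hardware):

```python
# Leaked Durango GPU figures (assumptions from vgleaks, not confirmed hardware).
shader_cores = 12        # compute units (CUs)
simds_per_cu = 4         # SIMD units per CU (per the GCN white paper)
lanes_per_simd = 16      # work items each SIMD executes per clock
clock_hz = 800e6         # 800 MHz

# Issue rate: 12 * 4 * 16 = 768 ops/clock, matching the leak.
ops_per_clock = shader_cores * simds_per_cu * lanes_per_simd

# Each lane can do a multiply-add per clock: 1 mul + 1 add = 2 FLOPs.
tflops = ops_per_clock * 2 * clock_hz / 1e12

print(ops_per_clock)  # 768
print(tflops)         # 1.2288, the "1.2 TFLOPS" in the leak
```

So the vgleaks formula and the white paper description agree once you read "16 threads/clock" as the per-clock issue width of one SIMD.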
 
Quantum wasn't bad.
I would like a sequel, but more than that I would like some of the diversity the original Xbox exclusives had.

I think there was as much diversity among Xbox 360 exclusives, if not more, and I expect the same levels of diversity for the next Xbox.

I was never big on futuristic racers except for Rollcage. Sony bringing that back would be amazing, but that's a discussion for another thread.
 
One compute unit in GCN has 4 vector units, and each has 16 processing elements.
Those processing elements are the shader cores.

Try the AMD OpenCL Programmer's Guide; it really cleared some things up for me during my internship. I may have swapped some of the terms; it's been 3-4 months since I last looked into it.

They mean a CU for the 64-threads bit; each CU has 64 threads because each has 4 SIMD units which are 16-way each.

As per the GCN white paper:
"In GCN, each CU includes 4 separate SIMD units for vector processing. Each of these SIMD units simultaneously executes a single operation across 16 work items, but each can be working on a separate wavefront. This places emphasis on finding many wavefronts to be processed in parallel, rather than relying on the compiler to find independent operations within a single wavefront."

http://www.amd.com/la/Documents/GCN_Architecture_whitepaper.pdf

But this is not what vgleaks is saying, Kidbeta.

"Each of the four SIMDs in the shader core is a vector processor in the sense of operating on vectors of threads. A SIMD executes a vector instruction on 64 threads at once in lockstep."


Using the white paper's terms, this should mean "a (single) SIMD executes a single operation across 64 work items".
 
Nothing wrong with keeping expectations in check... But some have been lowering expectations a bit too extremely.

You can bank on moving goal posts after the reveal.

I fully expect moving goal posts; it's why I don't see the point in this crow-eating chatter. Those who aren't interested will find any reason to shit up a thread or complain.

Not at all, but it leads to some fucking funny forum posts.

I remember crying with laughter at the 'Letter to Square Enix' after FFXIII was announced for 360. That shit was priceless.

Has anyone got it?

Indeed it does, I won't argue against it providing some of the funniest forum posts/moments I've come across on here and other forums. I just find the talk of 'crow' eating juvenile in some respects. It feels like I've been transported back in time 7 years.

FFXIII was probably the most amazing time to be part of a forum: so many meltdowns, so much anger and bitterness. It was an insane time.

http://www.youtube.com/watch?v=3WZiOS5DMMc
 
In GCN, each CU includes 4 separate SIMD units for vector processing. Each of these SIMD units simultaneously executes a single operation across 16 work items, but each can be working on a separate wavefront. This places emphasis on finding many wavefronts to be processed in parallel, rather than relying on the compiler to find independent operations within a single wavefront.

http://www.amd.com/la/Documents/GCN_Architecture_whitepaper.pdf
But this is not what vgleaks is saying, Kidbeta.

"Each of the four SIMDs in the shader core is a vector processor in the sense of operating on vectors of threads. A SIMD executes a vector instruction on 64 threads at once in lockstep."


Using the white paper's terms, this should mean "a (single) SIMD executes a single operation across 64 work items".

Well, they got it wrong, because otherwise the SIMDs in Durango's GPU are 1/4 as powerful as the ones in Orbis. Your choice.
 
http://www.amd.com/la/Documents/GCN_Architecture_whitepaper.pdf

But this is not what vgleaks is saying, Kidbeta.

"Each of the four SIMDs in the shader core is a vector processor in the sense of operating on vectors of threads. A SIMD executes a vector instruction on 64 threads at once in lockstep."


Using the white paper's terms, this should mean "a (single) SIMD executes a single operation across 64 work items".

Well, they got it wrong, because otherwise the SIMDs in Durango's GPU are 1/4 as powerful as the ones in Orbis. Your choice.

?!?!?!

They use the formula 12 * 4 * 16:
12 SCs (CUs)
4 SIMDs
16 threads/clk


But going by their text it would be:

12 SCs
4 SIMDs
64 threads/clk

How could that be 1/4 less powerful than Orbis?!?

So the vgleaks Durango leak is wrong?

It doesn't say a CU (shader core) is 64 threads... it says a single SIMD is 64 threads.
And in fact every SC (or CU) in Durango has an L1 that's 256 lines long, not 64 (as in GCN).

Isn't this crazy?
 
Well, they got it wrong, because otherwise the SIMDs in Durango's GPU are 1/4 as powerful as the ones in Orbis. Your choice.

?!?!?!

They use the formula 12 * 4 * 16:
12 SCs (CUs)
4 SIMDs
16 threads/clk


But going by their text it would be:

12 SCs
4 SIMDs
64 threads/clk

How could that be 1/4 less powerful than Orbis?!?


Fewer TFLOPS but 4x the SIMD throughput, according to you.

So either they got the terminology wrong (which is the most likely thing, as the document they are basing it on says the exact same thing as the GCN docs, but in other words).

You may also realise, if you think about it, that the numbers you are quoting are the same if you assume the first one shows you the breakdown of threads per SIMD and the second one shows you the breakdown of threads per CU.
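The last point, that the two breakdowns are the same numbers at different granularities, can be sanity-checked directly (Python; the 12/4/16/64 figures come from the leak and the GCN white paper):

```python
cus = 12               # shader cores / compute units
simds_per_cu = 4

# Breakdown 1: per SIMD. Each SIMD issues 16 threads per clock.
threads_per_simd_per_clock = 16
rate_a = cus * simds_per_cu * threads_per_simd_per_clock

# Breakdown 2: per wavefront. Each SIMD carries a 64-thread wavefront,
# issued over 4 clocks, so the per-clock rate is 64 / 4 = 16 again.
wavefront = 64
clocks_per_wavefront = 4
rate_b = cus * simds_per_cu * wavefront // clocks_per_wavefront

print(rate_a, rate_b)  # 768 768: same issue rate, two granularities
```

Either way you slice it, the GPU issues 768 ops per clock; "16 threads/clock" and "64 threads in lockstep" are describing the same hardware.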
 
Fewer TFLOPS but 4x the SIMD throughput, according to you.

So either they got the terminology wrong (which is the most likely thing, as the document they are basing it on says the exact same thing as the GCN docs, but in other words).

You may also realise, if you think about it, that the numbers you are quoting are the same if you assume the first one shows you the breakdown of threads per SIMD and the second one shows you the breakdown of threads per CU.



They wrote on vgleaks that one SIMD (one of the four in the shader core / compute unit) executes 1 op on 64 threads, but at the same time they used a different formula to calculate the ops/clock:

12 shader cores * 4 SIMDs (in every shader core) * 16 threads (I don't know where they got this, seeing that they say every one of the 4 SIMDs can do 64 threads, not 16).

Plus the L1 is different from GCN and is 256 lines long, not 64.

If you do the formula using what they say about the threads,

it's 12 * 4 * 64 = 3072 ops/clock!?!?!
 
Well, they got it wrong, because otherwise the SIMDs in Durango's GPU are 1/4 as powerful as the ones in Orbis. Your choice.

?!?!?!

They use the formula 12 * 4 * 16:
12 SCs (CUs)
4 SIMDs
16 threads/clk


But going by their text it would be:

12 SCs
4 SIMDs
64 threads/clk

How could that be 1/4 less powerful than Orbis?!?

So the vgleaks Durango leak is wrong?

It doesn't say a CU (shader core) is 64 threads... it says a single SIMD is 64 threads.
And in fact every SC (or CU) in Durango has an L1 that's 256 lines long, not 64 (as in GCN).

Isn't this crazy?

Why does any of this matter?

If all you care about is GPU cores... buy a PC.


None of this will be the least bit relevant once we simply see the games.

On paper 360 would 'never' keep up with PS3 / CELL... in reality....?
 
Why does any of this matter?

If all you care about is GPU cores... buy a PC.


None of this will be the least bit relevant once we simply see the games.

On paper 360 would 'never' keep up with PS3 / CELL... in reality....?


I'm just saying that the vgleaks text is wrong...

or their formula result is wrong...

and everyone is basing things either on wrong text, or on correct text and a wrong math formula.

I'm not only interested in GPU cores :D but it's also interesting to know what you've got under the hood, no?
 
Why does any of this matter?

If all you care about is GPU cores... buy a PC.


None of this will be the least bit relevant once we simply see the games.

On paper 360 would 'never' keep up with PS3 / CELL... in reality....?

On paper, both companies lied, saying they were 1+ TFLOP consoles, lol.

And it matters to me because it's damn 2013. This gen is 8 years old. The last thing I want to see is these damn consoles shipping with hardware from 2 years ago.

I want my games to all run at 1080p 60fps (which we already know we aren't getting for most games, if the PS4 is anything to go by) and then drop down to 1080p 30fps for 2nd/3rd-gen games. I don't want to be playing 720p games in freaking 2020, lol.
 
On paper, both companies lied, saying they were 1+ TFLOP consoles, lol.

And it matters to me because it's damn 2013. This gen is 8 years old. The last thing I want to see is these damn consoles shipping with hardware from 2 years ago.

I want my games to all run at 1080p 60fps (which we already know we aren't getting for most games, if the PS4 is anything to go by) and then drop down to 1080p 30fps for 2nd/3rd-gen games. I don't want to be playing 720p games in freaking 2020, lol.

The technology is going to be cutting edge but the power won't because of price.
 
On paper, both companies lied, saying they were 1+ TFLOP consoles, lol.

And it matters to me because it's damn 2013. This gen is 8 years old. The last thing I want to see is these damn consoles shipping with hardware from 2 years ago.

I want my games to all run at 1080p 60fps (which we already know we aren't getting for most games, if the PS4 is anything to go by) and then drop down to 1080p 30fps for 2nd/3rd-gen games. I don't want to be playing 720p games in freaking 2020, lol.

If resolution and frame rate are that important, buy a PC?

The value that consoles add is in exclusive experiences and services... the technology will be out of date before it even goes on sale.

The discussion about power is just nonsense, IMO.

It will come down to the most compelling games and the best package of services... at the right price.
 
They wrote on vgleaks that one SIMD (one of the four in the shader core / compute unit) executes 1 op on 64 threads, but at the same time they used a different formula to calculate the ops/clock:

12 shader cores * 4 SIMDs (in every shader core) * 16 threads (I don't know where they got this, seeing that they say every one of the 4 SIMDs can do 64 threads, not 16).

Plus the L1 is different from GCN and is 256 lines long, not 64.

If you do the formula using what they say about the threads,

it's 12 * 4 * 64 = 3072 ops/clock!?!?!

Because they made a mistake in part of it. It is standard GCN, and that is obvious to anyone; that's why it is not 3072 ops/clock. We have been over this multiple times on the forum.

Also, you got it waaaaay wrong.

The L2 is 512 KB (shared between all the CUs); the L1 is 16 KB per CU.

As per this link:

http://www.vgleaks.com/durango-gpu-2/2/

As per the same link, under 'compute' you'll see that each SIMD executes an array of 16 threads. "SIMD vector" could, albeit with bad terminology, be interpreted as a vector of SIMD units.
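As a side note on the 256-vs-64 confusion: a 16 KB cache with 64-byte lines necessarily has 16384 / 64 = 256 lines, and the "64-way" in the leak describes associativity, not line count, so the two numbers aren't in conflict. A trivial check (Python, sizes taken from the vgleaks quote):

```python
l1_bytes = 16 * 1024   # 16 KB L1 per CU, per the vgleaks quote
line_bytes = 64        # 64-byte cache lines

# Line count follows from size / line size, regardless of associativity.
lines = l1_bytes // line_bytes
print(lines)  # 256, the "composed of 256 64-byte cache lines" figure
```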
 
If resolution and frame rate are that important, buy a PC?

The value that consoles add is in exclusive experiences and services... the technology will be out of date before it even goes on sale.

The discussion about power is just nonsense, IMO.

It will come down to the most compelling games and the best package of services... at the right price.

What's wrong with wanting the total package?

You can't freaking deny what I said. It will be downright pitiful if these consoles have to drop back down to 1280x720 to run games.

Couple that with the fact that we are reaching the point where people will be upgrading their TVs again; the last thing I want is a damn console that can't do full HD a few years from now.
 
If resolution and frame rate are that important, buy a PC?

The value that consoles add is in exclusive experiences and services... the technology will be out of date before it even goes on sale.

The discussion about power is just nonsense, IMO.

It will come down to the most compelling games and the best package of services... at the right price.
But to me, that's where the PS4 reveal failed to impress. The hardware specs were impressive, but the first-party line-up announced just seemed terribly predictable and uninspired. Knack saved it for me.
 
Hey... even the Wii U got a clock increase with a firmware update:
http://www.vg247.com/2013/05/10/wii-u-latest-system-update-has-increased-cpu-gpu-speed-rumour/

Their post reads:

CPU: IBM PowerPC 7xx-based tri-core processor "Espresso", clocked at 1.24 GHz before the 3.0.0 update and 3.24 GHz after it. This is an evolution of the Broadway chip used in the Wii, is 64-bit, and uses Power6 technology. While IBM has said that Nintendo licensed Power7 tech, Nintendo is not using it for the Wii U, explaining its backwards compatibility.

GPU: AMD Radeon High Definition processor codenamed "Latte" with an eDRAM cache built onto the die, clocked at 550 MHz before the 3.0.0 update and 800 MHz after it.

So following the 3.0.0 update, it is claimed that both the CPU and GPU have seen a clock speed increase. Do you think this is likely? Let us know below.
 
They wrote on vgleaks that one SIMD (one of the four in the shader core / compute unit) executes 1 op on 64 threads, but at the same time they used a different formula to calculate the ops/clock:

12 shader cores * 4 SIMDs (in every shader core) * 16 threads (I don't know where they got this, seeing that they say every one of the 4 SIMDs can do 64 threads, not 16).

Plus the L1 is different from GCN and is 256 lines long, not 64.

If you do the formula using what they say about the threads,

it's 12 * 4 * 64 = 3072 ops/clock!?!?!

GCN executes the same instruction for 4 cycles, if I'm not mistaken. Maybe that is where the confusion happened.

You have to divide 3072 by 4. Either way, the MisterX guy thinks each compute unit finishes its work in a single cycle, but that is not how GCN works: each SIMD repeats the same instruction for 4 cycles, and that is where he gets the 64 threads from, I would guess.
I'm pretty sure this is all in the OpenCL Programmer's Guide.
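The four-cycle cadence described above can be sketched in a few lines (Python; sizes from the GCN docs, scheduling heavily simplified): a 64-thread wavefront is pushed through a 16-lane SIMD in four slices, while the scheduler round-robins over the four SIMDs so each gets attention once every four clocks.

```python
WAVEFRONT = 64   # threads per wavefront in GCN
LANES = 16       # physical lanes per SIMD unit
SIMDS = 4        # SIMD units per compute unit

# A 64-thread wavefront executes 16 threads at a time: 64 / 16 = 4 cycles.
cycles_per_wavefront = WAVEFRONT // LANES

# Round-robin scheduling: on each clock the scheduler considers one SIMD,
# which lines up neatly with the 4-cycle instruction throughput.
schedule = [clock % SIMDS for clock in range(8)]

print(cycles_per_wavefront)  # 4
print(schedule)              # [0, 1, 2, 3, 0, 1, 2, 3]
```

So "64 threads in lockstep" and "16 threads per clock" are both true at once: the 64 threads are issued as four back-to-back groups of 16.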
 
What's wrong with wanting the total package?

You can't freaking deny what I said. It will be downright pitiful if these consoles have to drop back down to 1280x720 to run games.

Couple that with the fact that we are reaching the point where people will be upgrading their TVs again; the last thing I want is a damn console that can't do full HD a few years from now.

That's why PC GPUs cost more than these entire systems?

Either company could make the 'fanboy's wet dream' machine that everyone is gagging for... but people won't pay much more than $499 or £399 for it...

So, sure, it's nice to wish for things, but the people on here talking up 'secret sauce' and all sorts of other wishful-thinking BS are the selfsame people who will have toddler-style tantrums when the price is high.

And actually, if they can run 720p with shit-hot internal rendering IQ and high-quality AA, then games will look good; part of the reason current-gen games look bad at 720p is muddy low-res textures, sub-HD internal rendering, and poor IQ in general.

Look how great Wii games look once the IQ gets sorted out by an emulator.

Most games will run 1080p but, as we've seen on the PS4, that's likely to be at 30fps.

I trust designers to make the choices that are right for their games.
 