Predict: The Next Generation Console Tech

What do all these links you provide to Crysis and UE interviews really have to do with predicting what's in the next-gen boxes? Rein and Yerli aren't designing these boxes. They don't have to consider business strategy. They don't have to worry about costs. You may as well interview a load of gamers about what they want in order to gauge what the next boxes will have. MS and Sony will consult with a lot of people and make a lot of choices. Can we please move away from Twitter feeds and PR articles as sources for technical considerations?
 
So let me ask a question about the GPU/CPU differently. What do you guys expect the combined TDP to be for both?

I can't believe the budget for both will be more than 100W. It would disappoint me, but I could see it as low as 75W.

The top Trinity APU (32nm) from AMD has a TDP of 100W. It has 4 Piledriver cores at 3.8 GHz and 384 VLIW4 shaders at 760 MHz (~580 GFLOPS).

Assuming the consoles will be 28nm, that's only a half-node step from 32nm. I think that changing the cores from 4 Piledrivers to 8 Jaguars and lowering the clock would buy you enough TDP, and maybe area, to increase the GPU performance 2-2.5x (768 shaders @ 750 MHz-1 GHz = 1.15-1.5 TFLOPS).
With the supposed eSRAM, I think this could give you the performance of a 7850 GPU in actual real-world applications (games). I think this would easily be 10x the performance of the original 360 (once again in actual games, not theoretical raw performance).
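For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope sketch in Python (purely illustrative; the 768-shader configuration is just the hypothetical one from the estimate above). It assumes the usual two FLOPs, i.e. one fused multiply-add, per shader ALU per clock.

```python
# Back-of-the-envelope peak-FLOPS check for the figures above.
# Assumes 2 FLOPs (one fused multiply-add) per shader ALU per clock;
# the shader counts and clocks are just the numbers speculated in this post.

def peak_gflops(shaders: int, clock_mhz: float, flops_per_clock: int = 2) -> float:
    """Peak single-precision GFLOPS = shaders * clock (GHz) * FLOPs per clock."""
    return shaders * (clock_mhz / 1000.0) * flops_per_clock

print(peak_gflops(384, 760))    # Trinity GPU: ~584 GFLOPS (the ~580 quoted above)
print(peak_gflops(768, 750))    # hypothetical 768 shaders @ 750 MHz: ~1150 GFLOPS
print(peak_gflops(768, 1000))   # hypothetical 768 shaders @ 1 GHz: ~1536 GFLOPS
```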

I really think MS will want to go with an APU design from the start and keep the TDP at no more than 150W (comparable to the Falcon revision).
 
Can we trust the Durango devkit pics?

the final GPU can't be "less" powerful than the devkit GPU (that's what lherre said).

Sure it could. The dev kits could be underclocked, and the devs might also be told that specific performance targets will be, say, 80% of devkit graphics performance. An early-stage devkit is more about getting the right environment, not the right performance.
 
Sure it could. The dev kits could be underclocked, and the devs might also be told that specific performance targets will be, say, 80% of devkit graphics performance. An early-stage devkit is more about getting the right environment, not the right performance.

If the pics of the last devkit are real, we need to find the GPU inside.
 
Are we in agreement that both MS and Sony will use a 4x Blu-ray drive? What's the state of current consumer models? Are there any 6x/8x models out there?

[flops capacitor is the best tag]
 
Are we in agreement that both MS and Sony will use a 4x Blu-ray drive? What's the state of current consumer models? Are there any 6x/8x models out there?

[flops capacitor is the best tag]

This is supposedly one of the fastest Blu-ray drives out there: http://www.lg.com/au/burners-drives/lg-CH10LS20.AYBR10B-blu-ray-drive

It supports 8x CAV speed (~36 MB/s), which is pretty good, and probably much better than what some people expected Blu-ray to be capable of. I don't expect a drive this fast in next-gen consoles though; 4-6x CAV is more realistic.
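For reference, the transfer rates work out as below. This is a quick sketch assuming the standard 1x BD-ROM rate of 36 Mbit/s (4.5 MB/s); with a CAV drive the quoted multiple is only reached at the outer edge of the disc, so the average rate is lower.

```python
# Quick sanity check on Blu-ray transfer rates.
# Assumes the standard 1x BD-ROM rate of 36 Mbit/s = 4.5 MB/s; with CAV the
# quoted multiple is the peak at the disc's outer edge, not the average.

BD_1X_MB_PER_S = 36 / 8  # 4.5 MB/s

for speed in (4, 6, 8):
    print(f"{speed}x BD ~ {speed * BD_1X_MB_PER_S:.0f} MB/s peak")
# 4x ~ 18 MB/s, 6x ~ 27 MB/s, 8x ~ 36 MB/s (matches the ~36 MB/s above)
```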
 
If the pics are real, then we have the "minimum" flops for Xbox 720.

No, you have a picture of an alpha devkit.
Even if the part were selected to give an indication of performance, it could be underclocked, the BIOS could be updated to disable shaders, etc.
The same goes for whatever is in the other alpha kits.

Given we have one VLIW4 and one VLIW5 card as possibilities, it doesn't even narrow that down.

Edit: I forgot 69xx cards were VLIW4, not VLIW5.
 
So they send different dev kits to different studios? Oh, and ERP, since you are knowledgeable on the subject, shouldn't developers already have dev kits with real silicon by now?
 
VLIW4 always looked strangely short-lived to me. Could it have been developed primarily for consoles and used as a stopgap for the PC until GCN?
What is best suited for the typical console workload, VLIW4 or GCN/GCN SI?
 
No, you have a picture of an alpha devkit.
Even if the part were selected to give an indication of performance, it could be underclocked, the BIOS could be updated to disable shaders, etc.
The same goes for whatever is in the other alpha kits.

Given we have one VLIW4 and one VLIW5 card as possibilities, it doesn't even narrow that down.

Edit: I forgot 69xx cards were VLIW4, not VLIW5.

Then we can expect more than 1.5 TFLOPS... :?:
 
So they send different dev kits to different studios? Oh, and ERP, since you are knowledgeable on the subject, shouldn't developers already have dev kits with real silicon by now?

There's no chance of real silicon while more than a year from launch.
 
So they send different dev kits to different studios? Oh, and ERP, since you are knowledgeable on the subject, shouldn't developers already have dev kits with real silicon by now?

When the Xbox 1 launched, we didn't see kits with real silicon until late spring of the year it shipped. And even then they were in extremely short supply.
360, PS2 and PS3 did better than that, but still only 6-10 months before launch.

Sony usually does better than MS at this because traditionally they have used different hardware in the devkits (devkits aren't retail units). MS has traditionally used a variation on the retail unit as the devkit, which requires final silicon (or close to it) to be available.

My guess would be that MS is targeting year end at the earliest, and it'll be March or later before final devkits are available.
 
VLIW4 always looked strangely short-lived to me. Could it have been developed primarily for consoles and used as a stopgap for the PC until GCN?
What is best suited for the typical console workload, VLIW4 or GCN/GCN SI?

For a GPU, a console doesn't really have a different workload per se.
PC GPUs do tend to be backwards-looking, in so far as the paths through the silicon that are fast are always the ones used by the key titles from the last 12 months, so titles like Rage or Crysis actually drive/stagnate development, but usually after they've shipped.
On a console, games will use the resources as they are presented, so there is scope to be more forward-looking; it's hard to say if they actually will be.

The only interesting thing about the choice of card in the devkit is that it's not GCN. MS has to write a shader compiler, and it's easier to write one than two (one for the alpha kits and one for the final silicon).
Having said that, it's possible they couldn't source enough GCN-based cards when the dev kits shipped to developers. I've also heard a rumor of a late GPU change on the MS side, but I have no way to confirm it, so you can't even infer much from that.
 
I have no idea.
Any number not from developer docs is worthless.
And I wouldn't get too enamored with flops as a sole measure of performance either.

But my logic tells me that they can't put a 2+ TFLOPS GPU in the alpha devkits and then change to a 1.5 TFLOPS GPU in the final devkit. Am I wrong?
 