Predict: The Next Generation Console Tech

Status
Not open for further replies.
Ok, yeah, that does sound like MS is massaging some info, so I would agree with James Sawyer Ford. I'm sure there are features comparable to a 680, but not the raw power. "Performance" as used in aegies' quote ("comparable to 680 performance") can be open to interpretation; features may be the better interpretation.



Thanks.

What's this about the semiaccurate post?

But yeah I still lean much more towards Xbox 3's GPU "being less" than PS4's GPU. I just never got the same level of detailed info on Xbox 3 compared to what I got with PS4 to be as confident about how Xbox 3's GPU was shaping up.

Are you a developer? Where do you get your information from? I am only curious here. Just want to get a feel for things.
 
Are you a developer? Where do you get your information from? I am only curious here. Just want to get a feel for things.

Nah, not a dev. Just a regular poster. Wii U discussion from those who had info/dev kit access led to getting info on PS4 and Xbox 3 as well. The latter two being easier to obtain.
 
I really don't know. But it does on the face of it seem too good to be true.

It's funny the 680 is the GPU that is specifically compared though. I remember that Bkilian did say no one had guessed what the card in the leaked devkit picture was. I wonder if it was a 680?

Because 680 is used on several next gen games/engines.
 
I remember that old Xbox powerpoint that got leaked said they were aiming for 6x 360 performance, but later in the slide it also had 8x. I think that 6x was for just the games though.

6x would roughly be 1.2 tf so I wonder if they really did stick to that figure all this time. And then added extra DSPs to accelerate certain operations as an afterthought.

There must still be some relevant things from this slide. DSP? 2HD Dec? (decode I guess). Compositing? For video overlays?

yukon.jpg
 
But yeah I still lean much more towards Xbox 3's GPU "being less" than PS4's GPU. I just never got the same level of detailed info on Xbox 3 compared to what I got with PS4 to be as confident about how Xbox 3's GPU was shaping up.

Are you a developer? Where do you get your information from? I am only curious here. Just want to get a feel for things.

It's actually going to be a lot harder for Sony to have a slight hardware advantage this time. Putting budget and manufacturing costs aside, Microsoft is keeping the specs under tight wraps this gen, so turning out with better hardware would take insight into your competition now. With info on MS's console still being tight, Sony would have to take a guess if they were to make such a gamble.

The only thing they could do (if they haven't done it already) is to keep looking at AMD to see what's hot and what's not, or search for something custom from them.
 
Nah, not a dev. Just a regular poster. Wii U discussion from those who had info/dev kit access led to getting info on PS4 and Xbox 3 as well. The latter two being easier to obtain.

If this Wii U discussion is a forum, can you give us a link so we can check things out?
 
I remember that old Xbox powerpoint that got leaked said they were aiming for 6x 360 performance, but later in the slide it also had 8x. I think that 6x was for just the games though.

6x would roughly be 1.2 tf so I wonder if they really did stick to that figure all this time. And then added extra DSPs to accelerate certain operations as an afterthought.

There must still be some relevant things from this slide. DSP? 2HD Dec? (decode I guess). Compositing? For video overlays?

yukon.jpg

That is so old. I was told it was obsolete and that it was some ideas from one person, not an actual direction (although some of it may be in there).
 
Thanks.

What's this about the semiaccurate post?

See my post:
http://forum.beyond3d.com/showpost.php?p=1693204&postcount=18196

But yeah I still lean much more towards Xbox 3's GPU "being less" than PS4's GPU. I just never got the same level of detailed info on Xbox 3 compared to what I got with PS4 to be as confident about how Xbox 3's GPU was shaping up.
That's why I also think that if it was 1.6 or 1.7 TF it would be too close in power for both your source and lherre to say it's definitively less than PS4, they'd say it was slightly less or about the same etc if there was only a 10% difference in FLOPS.

That and the fact that I haven't seen any Durango rumours mentioning a GPU in the 1.6-1.8 range, most of the rumours with FLOPS point to 1.2 TF or thereabouts (or go wildly over 2+, 3+ etc)
 
That is so old, I was told that was obsolete and that was some ideas from one person and not an actual direction (although some of that may be in there).

All the blue squares inside the orange square are not "customizations"?

EDIT: I mean, maybe some of them has been kept.
 
I remember that old Xbox powerpoint that got leaked said they were aiming for 6x 360 performance, but later in the slide it also had 8x. I think that 6x was for just the games though.

6x would roughly be 1.2 tf so I wonder if they really did stick to that figure all this time. And then added extra DSPs to accelerate certain operations as an afterthought.

There must still be some relevant things from this slide. DSP? 2HD Dec? (decode I guess). Compositing? For video overlays?

http://i.i.com.com/cnwk.1d/i/tim/2012/06/16/yukon.jpg
Outdated


(((interference))) said:
See my post:
http://forum.beyond3d.com/showpost.php?p=1693204&postcount=18196


That's why I also think that if it was 1.6 or 1.7 TF it would be too close in power for both your source and lherre to say it's definitively less than PS4, they'd say it was slightly less or about the same etc if there was only a 10% difference in FLOPS.

That and the fact that I haven't seen any Durango rumours mentioning a GPU in the 1.6-1.8 range, most of the rumours with FLOPS point to 1.2 TF or thereabouts (or go wildly over 2+, 3+ etc)
No one on SemiAccurate said rangers was right; only rangers himself said he was right.
http://semiaccurate.com/forums/showthread.php?t=6201&page=113
They don't even care what rangers said.

And "most of the rumours with FLOPS point to 1.2 TF or thereabouts" is just because you pick the rumours you like (well, of course, you're not the only one).
 
Does anyone know what the bandwidth would be for this?
DDR3 2133 on a 384 bit bus + EDRAM
No, because the eDRAM isn't spec'd. What clock, what bus width? May as well ask the question:
"Does anyone know what the bandwidth would be for this?
GDDR5"
 
Does anyone know what the bandwidth would be for this?
DDR3 2133 on a 384 bit bus + EDRAM

DDR3 at 2133 MT/s on a 384-bit bus would give 2133 x 384 / 8 = 102,384 MB/s, or about 102.4 GB/s of bandwidth.

As Shifty said, eDRAM depends on the clocks etc. I also don't think it's as simple as adding the two together and saying that value equals Durango's total system bandwidth.

I believe you'll more likely see 8GB of DDR3 1800 on a 256-bit bus on Durango. So 57.6 GB/s of bandwidth to main memory (even if they go DDR3 2133 on a 256-bit bus it's only 68.3 GB/s). A 256-bit bus is the widest they'd go without using chip stacking tech (the inclusion of eDRAM/ESRAM indicates they aren't), without giving them issues shrinking the chip later on. Unless Charlie was right all along and their chip is humongous :p
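The arithmetic in the post above is just transfer rate times bus width; a quick sketch of it in Python (the rates and bus widths are the ones quoted in the thread, not confirmed specs):

```python
# Peak DDR bandwidth: transfer rate (MT/s) * bus width (bits) / 8 bits per byte.
# Uses 1 GB/s = 1000 MB/s, matching the figures quoted in the thread.
def ddr_bandwidth_gb_s(transfer_rate_mt_s: float, bus_width_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return transfer_rate_mt_s * bus_width_bits / 8 / 1000

# The configurations discussed in the post:
print(ddr_bandwidth_gb_s(2133, 384))  # ~102.4 GB/s
print(ddr_bandwidth_gb_s(2133, 256))  # ~68.3 GB/s
print(ddr_bandwidth_gb_s(1800, 256))  # ~57.6 GB/s
```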
 
I believe you'll more likely see 8GB of DDR3 1800 on a 256bit bus on Durango. So 57.6 GB/s bandwidth to main memory (even if they go DDR3 2133 on a 256 bit bus its only 68.3 GB/s bandwidth). A 256 bit bus is the widest they'd go without using chip stacking tech (the inclusion of eDRAM/ESRAM indicates they aren't) without giving them issues shrinking the chip later on.

Are you sure about the bold part? Unless 1024-bit and 512-bit stacking have similar costs, going for 512-bit stacking plus EDRAM might still make sense. Even with 1024-bit stacking, a relatively small low-latency scratchpad memory on chip (ESRAM) might still be useful for GPGPU.

In my opinion, the most unlikely scenario is 256-bit DDR3/DDR4 plus ESRAM. You just can't put enough ESRAM on chip to compensate for such low main memory bandwidth (2.5x-3.5x last generation). EDRAM is a different story, though.
 
If the latest Durango rumours regarding system memory bandwidth are correct (the same ones that state Durango can access 1GB of DDR memory per frame), and assuming that means a 60fps game, we're talking:

DDR3/4 2133 on a 256-bit bus, as this gives 68.3 GB/s of bandwidth. So in the 16.7 ms frame time of a 60fps game you're looking at 1.14 GB of DDR accessible per frame. Fits nicely :D

If they meant 1GB accessible for a 30 fps game then you're looking at half the bandwidth, i.e. either DDR3/4 2133 or DDR3/4 1800 on a 128-bit bus (28.8 GB/s gives 0.96 GB accessible per frame), and pretty much that's it.

PS4 with 192 GB/s of memory bandwidth could access 6.4 GB/frame or 3.2 GB/frame in a 30 or 60 fps game respectively.

Obviously the addition of eDRAM/eSRAM on Durango makes this not such a clear comparison, but the point of the above was to try to nail down Durango's DDR specs and fit them with the "1GB accessible in a single frame" rumour.
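The per-frame figures above follow directly from bandwidth times frame time; a minimal sketch using the rumoured numbers from the post (none of these are confirmed specs):

```python
# Upper bound on main-memory traffic per frame: bandwidth divided by frame rate.
def gb_accessible_per_frame(bandwidth_gb_s: float, fps: float) -> float:
    """GB of memory traffic possible in one frame at a given frame rate."""
    return bandwidth_gb_s / fps

# Rumoured Durango configurations:
print(gb_accessible_per_frame(68.3, 60))   # ~1.14 GB per 60fps frame
print(gb_accessible_per_frame(28.8, 30))   # ~0.96 GB per 30fps frame
# PS4 rumour, 192 GB/s:
print(gb_accessible_per_frame(192.0, 30))  # ~6.4 GB per 30fps frame
print(gb_accessible_per_frame(192.0, 60))  # ~3.2 GB per 60fps frame
```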
 
Are you sure about the bold part? Unless 1024-bit and 512-bit stacking have similar costs, going for 512-bit stacking plus EDRAM might still make sense. Even with 1024-bit stacking, a relatively small low-latency scratchpad memory on chip (ESRAM) might still be useful for GPGPU.

In my opinion, the most unlikely scenario is 256-bit DDR3/DDR4 plus ESRAM. You just can't put enough ESRAM on chip to compensate for such low main memory bandwidth (2.5x-3.5x last generation). EDRAM is a different story, though.

DDR3/4 2133 on a 1024-bit bus gives 273.0 GB/s of memory bandwidth. The eDRAM of the 360 gave around that figure. What would be the point of adding such complexity and cost for little to no bandwidth gain?

If they had a unified 8 GB stack of DDR3 2133 on a 1024-bit bus then that would be the de facto PERFECT memory setup. Adding any more eSRAM or eDRAM or any other such memory pool would be entirely superfluous and idiotic, adding cost for the sake of cost. There's no benefit at all.

The fact that they have ESRAM (rumoured) indeed points to the exclusion of stacking. I'm 99% sure that MS won't have stacked memory in Durango if they do indeed have esram/edram.
 
DDR3/4 2133 on a 1024bit bus gives 273.0GB/s memory bandwidth. eDRAM of the 360 gave around that figure.
Only to the ROPs. But otherwise you're right. 270 GB/s would be fine for a console without needing eDRAM which would contribute nothing. If Durango has ESRAM/EDRAM/eDRAM/RedRUM, the DDR3 will be on a traditional, slow bus, with the on-chip RAM providing the high-speed BW.
 
DDR3/4 2133 on a 1024bit bus gives 273.0GB/s memory bandwidth. eDRAM of the 360 gave around that figure. What would be the point of adding such complexity and cost for little to no bandwidth gain?

Are you implying that, if they are going with stacking, it will be 1024-bit and not 512-bit? Because 512-bit DDR3 at 1866 MT/s would give 119 GB/s which is barely enough without EDRAM/ESRAM. I guess you may be right, since when you add the cost of the interposer, it would probably be more cost effective to go all the way to 1024-bit rather than 512-bit wide.

If they had a unified 8 GB of DDR3 2133 stack on a 1024bit bus then that would be the de facto PERFECT memory setup. Adding anymore es or edram or any other such memory pool would be entirely superfluous and idiotic, adding cost for the sake of cost. There's no benefit at all.

ESRAM is typically used for caches and is currently present in pretty much all CPUs/GPUs available today, regardless of the main memory bandwidth. So I wouldn't call it superfluous and idiotic. :)

The fact that they have ESRAM (rumoured) indeed points to the exclusion of stacking. I'm 99% sure that MS won't have stacked memory in Durango if they do indeed have esram/edram.

A fairly large L2 cache on the GPU might be correctly quoted in rumors such as the GPU having ESRAM. It does not imply that the main memory bandwidth will be slow or that they aren't using stacking.
I doubt Microsoft can put 10-20 MB of ESRAM (you can't fit more into any reasonably sized SoC) to balance out having 1/3 the main memory bandwidth of their competitor. EDRAM is a whole different story, though.

To sum up, I expect one of the following scenarios:
1) 512-bit/1024 bit stacking + small amount ESRAM (maybe just a larger-than-usual L2 cache on the GPU)
2) 256-bit + large amount of EDRAM

Scenario (3), i.e. 256-bit DDR3/DDR4 with a small amount ESRAM, would be a terrible choice and I don't see that happening.
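A back-of-the-envelope die-area check supports the 10-20 MB ceiling on 6T SRAM mentioned above. The bitcell area and array-overhead factor below are assumed ballpark values for a ~28 nm process, not known Durango figures:

```python
# Rough area cost of 6T SRAM on a SoC.
# ASSUMPTIONS (ballpark, not confirmed): ~0.12 um^2 per 6T bitcell at ~28 nm,
# and a 2x factor for sense amps, decoders, and routing overhead.
BITCELL_UM2 = 0.12
OVERHEAD = 2.0

def sram_area_mm2(megabytes: float) -> float:
    """Approximate die area in mm^2 for the given amount of 6T SRAM."""
    bits = megabytes * 1024 * 1024 * 8
    return bits * BITCELL_UM2 * OVERHEAD / 1e6  # convert um^2 to mm^2

for mb in (10, 20, 32):
    print(f"{mb} MB -> ~{sram_area_mm2(mb):.0f} mm^2")
```

Under these assumptions even 20 MB costs on the order of 40 mm^2, a large slice of any reasonably sized SoC, which is why a denser eDRAM (or 1T-SRAM) pool is the more plausible route to a big on-chip buffer.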
 
Are you implying that, if they are going with stacking, it will be 1024-bit and not 512-bit? Because 512-bit DDR3 at 1866 MT/s would give 119 GB/s which is barely enough without EDRAM/ESRAM. I guess you may be right, since when you add the cost of the interposer, it would probably be more cost effective to go all the way to 1024-bit rather than 512-bit wide.



ESRAM is typically used for caches and is currently present in pretty much all CPUs/GPUs available today, regardless of the main memory bandwidth. So I wouldn't call it superfluous and idiotic. :)



A fairly large L2 cache on the GPU might be correctly quoted in rumors such as the GPU having ESRAM. It does not imply that the main memory bandwidth will be slow or that they aren't using stacking.
I doubt Microsoft can put 10-20 MB of ESRAM (you can't fit more into any reasonably sized SoC) to balance out having 1/3 the main memory bandwidth of their competitor. EDRAM is a whole different story, though.

To sum up, I expect one of the following scenarios:
1) 512-bit/1024 bit stacking + small amount ESRAM (maybe just a larger-than-usual L2 cache on the GPU)
2) 256-bit + large amount of EDRAM

Scenario (3), i.e. 256-bit DDR3/DDR4 with a small amount ESRAM, would be a terrible choice and I don't see that happening.

You may be assuming ESRAM is 6T SRAM, in which case it would indeed be as you describe.

However, the very fact that ESRAM is being rumoured as a "special" or "secret sauce" customisation may well mean that the rumoured ESRAM is actually a misunderstanding of 1T-SRAM, i.e. the same stuff in the GameCube, which as I understand it is effectively eDRAM to all intents and purposes?

I have a hard time imagining people pimping ESRAM as a special addition to Durango when it exists in pretty much every chip as caches and what not. The highlighting of ESRAM in my opinion points to 1T-SRAM which MS could surely have enough on chip to provide a framebuffer or high bandwidth scratchpad.

Edit:
Actually, reading more about 1T-SRAM, I'm fairly certain now that this is what will be in Durango instead of eDRAM (it seems there are minor functional differences between the two), mainly because 1T-SRAM is marketed as a SoC solution and would give MS more options for where they can have it fabbed than eDRAM.
 
ESRAM is typically used for caches and is currently present in pretty much all CPUs/GPUs available today, regardless of the main memory bandwidth. So I wouldn't call it superfluous and itiotic.
There's some confusion here. SRAM is the tech we use to talk about embedded static RAM caches. As it's always embedded, there's no need to precede it with an 'E'. MS is reportedly using something called ESRAM which is supposedly 1T DRAM, or something. Some are making the distinction calling it ESRAM, whereas others of us are sticking to calling it EDRAM. I actually call it eDRAM as that's how I've always known it to be written. EDRAM (all caps) has been used to mean Enhanced DRAM as opposed to Embedded DRAM.
 