Predict: The Next Generation Console Tech

It's well known that architectures improve efficiency over time. There have been 7 architecture changes from Xenos to GCN. You think a doubling of efficiency over that many generations is laughable?

What low expectations you have. The SIMD setup of GCN is far more efficient than the old Vec4+1 of Xenos, and I've heard around 2x quoted a few times on these forums.

100% is a big number for you to go slinging around. Comparing the GCN-powered 7850 with a 6850 at the same clocks, the cards perform very close to each other, though there is a slight advantage to the 7850.

Even comparing a 5770 to a 7770 at the same clocks (both cards have the same bandwidth, fill and texel rates), the performance, while faster on the 7770, shows an improvement in shader architecture of around 40% on average, which means that AMD would have had to make an improvement of ~60% between the VLIW4+1 of Xenos and the VLIW5-powered AMD 5000 series.

In theory one could argue an increase of 100%, but in reality it would more than likely be lower.
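To make the arithmetic explicit (treating the two jumps as multiplicative is an assumption on my part):

\[ \text{total gain} = (1 + g_{\text{Xenos}\rightarrow\text{VLIW5}}) \times (1 + g_{\text{VLIW5}\rightarrow\text{GCN}}) \]

With the ~40% figure for VLIW5 to GCN, a full doubling would need roughly \( 2.0/1.4 - 1 \approx 43\% \) from the Xenos-to-VLIW5 step if the gains compound, or ~60% if the two gains are simply added against the Xenos baseline.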
 
Even comparing a 5770 to a 7770 at the same clocks (both cards have the same bandwidth, fill and texel rates), the performance, while faster on the 7770, shows an improvement in shader architecture of around 40% on average, which means that AMD would have had to make an improvement of ~60% between the VLIW4+1 of Xenos and the VLIW5-powered AMD 5000 series.

As far as I know, Xenos is not a VLIW architecture...
 
Are you sure? AMD have used VLIW since R300, so it seems a bit odd that they would not use it for Xenos.

The first VLIW architecture from AMD was R600. R300 was a traditional SIMD architecture. Xenos can issue up to two instructions per cycle per unit: one 4-way vector instruction and one scalar instruction. That's quite different from how R600 works.
 
The first VLIW architecture from AMD was R600. R300 was a traditional SIMD architecture. Xenos can issue up to two instructions per cycle per unit: one 4-way vector instruction and one scalar instruction. That's quite different from how R600 works.

According to Google, the first AMD card to use VLIW was the R300-powered 9700 series.
 
http://www.anandtech.com/show/2231/4

Given this setup (last picture), it doesn't seem likely that R300/R400/Xenos are VLIW designs. Even if they are, the architecture is still quite different from R600, since they do not issue 5 independent scalar ops per cycle, but one vec4 op and one scalar op per cycle.

Also from AnandTech:

The use of VLIW can be traced back to the first AMD DX9 GPU, R300 (Radeon 9700 series). If you recall our Cayman launch article, we mentioned that AMD initially used a VLIW design in those early parts because it allowed them to process a 4 component dot product (e.g. w, x, y, z) and a scalar component (e.g. lighting) at the same time, which was by far the most common graphics operation. Even when moving to unified shaders in DX10 with R600 (Radeon HD 2900), AMD still kept the VLIW5 design because the gaming market was still DX9 and using those kinds of operations.

http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/2
 
Apparently they bundled the vector op + the scalar op in a single VLIW instruction ("VLIW2"). Still, VLIW5 in R600 is a different architecture. So we have to consider the efficiency gains of at least two architectural jumps: from Xenos to VLIW5/VLIW4, and from VLIW5/VLIW4 to GCN.
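To illustrate why the issue model matters for efficiency, here is a toy utilisation model (purely illustrative Python with a made-up instruction stream; it ignores Xenos' co-issue details and only contrasts compiler-packed VLIW slots with GCN-style one-op-per-lane issue):

[code]
# Toy model of ALU issue-slot utilisation (illustrative only, not a real ISA).
# Each entry in 'ilp' is the number of independent scalar ops the compiler
# can find for one issue cycle of one shader thread.

def vliw_utilisation(ilp, slots):
    """Pack up to 'slots' independent ops per VLIW instruction; unused slots sit idle."""
    used = sum(min(n, slots) for n in ilp)
    issued = len(ilp) * slots
    return used / issued

def scalar_simd_utilisation(ilp):
    """GCN-style: one op per lane per cycle, so a lane is busy whenever any work
    exists (memory stalls are ignored here)."""
    return sum(1 for n in ilp if n >= 1) / len(ilp)

# A hypothetical shader where the compiler finds 5, 3, 1, 4, 2 independent ops
# in successive cycles.
ilp = [5, 3, 1, 4, 2]
print("VLIW5 slot utilisation:", vliw_utilisation(ilp, 5))        # 0.6
print("VLIW4 slot utilisation:", vliw_utilisation(ilp, 4))        # 0.7
print("Scalar SIMD utilisation:", scalar_simd_utilisation(ilp))   # 1.0
[/code]

The point is only that VLIW's efficiency depends on how much instruction-level parallelism the compiler can find per cycle, whereas GCN's does not.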

VLIW4 is more efficient than VLIW5, which is why AMD moved back to VLIW4 with the 6000 series.

If I remember correctly, AMD quoted a 10% improvement moving back to VLIW4 from VLIW5.

It also seems that VLIW is better for graphics but not very good for compute, which is why AMD moved to GCN; they wanted a more rounded architecture.

VLIW4 to GCN, gaming-wise, is 20-30% in favour of GCN from what I've seen.

Either way, I would not expect a modern shader architecture to be much more than 50-60% more efficient than Xenos.
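Taking the figures above at face value and assuming they compound multiplicatively (my assumption, not a measured number):

\[ 1.10 \times 1.25 \approx 1.38 \]

which lines up with the ~40% seen in the 5770 vs 7770 comparison, and a total of 50-60% over Xenos would then leave only about \( 1.55 / 1.38 \approx 1.12 \), i.e. a ~10-15% step, for Xenos to VLIW5.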
 
There's this thing that has bugged me since the 6000 series announcement:

VLIW4 was really short-lived compared to any other architecture, and AMD had a good enough, and future-proof, architecture coming.
Could VLIW4 be developed for use in at least one console? Good at graphics, and who cares about compute?
 
Wow, this is why I try to say nothing, it's amazing what people will infer from almost no data.

Think about things logically. What are the important features required in a console, from the perspective of the manufacturer?

1. Enough power that users can see a clear reason to upgrade.
2. Low manufacturing cost.
3. Path to lower manufacturing costs, to enable profits and lower prices.
4. Design that meets all regions' regulations.

People keep asking why a company would back off the 200mm^2+ designs of the last generation. Simple: for reasons of #2 and #3. Process reductions are becoming more expensive, and taking longer to achieve. For that reason you have to start with a smaller upfront cost.
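To put rough numbers on why die size dominates #2 and #3, here is a back-of-the-envelope sketch (all figures hypothetical; real wafer prices and defect densities are not public):

[code]
# Back-of-the-envelope die cost vs die size (hypothetical numbers).
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard approximation for usable dies on a round wafer."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yielded_die_cost(die_area_mm2, wafer_cost, defects_per_mm2=0.001):
    """Poisson yield model: larger dies yield worse, so cost grows faster than area."""
    yield_rate = math.exp(-defects_per_mm2 * die_area_mm2)
    return wafer_cost / (dies_per_wafer(die_area_mm2) * yield_rate)

for area in (150, 250, 350):
    print(f"{area} mm^2 -> ~${yielded_die_cost(area, wafer_cost=5000):.0f} per good die (hypothetical)")
[/code]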

And note that #1 is not "as much power as we can fit", it's a much lower requirement.
Anyways, off to lunch.
 
Yeah, that sounds reasonable from a console maker's perspective. No one wants to risk the loss-up-front model anymore (ironically except Nintendo?? lol).
 
VLIW4 is more efficient than VLIW5, which is why AMD moved back to VLIW4 with the 6000 series.

If I remember correctly, AMD quoted a 10% improvement moving back to VLIW4 from VLIW5.

They did not "move back"... VLIW4 is very different from Xenos/R580/etc.

VLIW4 to GCN, gaming-wise, is 20-30% in favour of GCN from what I've seen.

Either way, I would not expect a modern shader architecture to be much more than 50-60% more efficient than Xenos.

I wouldn't be so sure; it is probably very workload dependent. The more complex the shaders are, the larger the benefit from modern architectures, and one could argue that games nowadays tend to have simple shaders because consoles are the lowest common denominator. Modern architectures differ from Xenos in several ways, including better latency hiding by keeping more threads in execution, read/write caches, more efficient handling of branches, etc.
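As a toy illustration of the latency-hiding point (hypothetical cycle counts, not any real GPU's figures):

[code]
# Toy latency-hiding model (illustrative only).
# Each wavefront alternates: 'alu' cycles of math, then a memory fetch that
# takes 'mem_latency' cycles before its next math burst can start.

def alu_busy_fraction(resident_wavefronts, alu=10, mem_latency=300):
    """Fraction of time the ALUs have a ready wavefront to execute."""
    available_work = resident_wavefronts * alu   # math cycles available per window
    window = alu + mem_latency                   # one compute + fetch round trip
    return min(1.0, available_work / window)

for wf in (1, 4, 16, 32):
    print(f"{wf:2d} wavefronts resident -> {alu_busy_fraction(wf):.2f} ALU utilisation")
[/code]

The more wavefronts the hardware can keep resident, the less the ALUs sit idle waiting on memory, and architectures differ a lot in how many they can hold.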
 
bkillian, are you saying 200mm^2+ chips are not attractive from a cost perspective assuming there were two of them last gen?
 
Wow, this is why I try to say nothing, it's amazing what people will infer from almost no data.

Think about things logically. What are the important features required in a console, from the perspective of the manufacturer?

1. Enough power that users can see a clear reason to upgrade.
2. Low manufacturing cost.
3. Path to lower manufacturing costs, to enable profits and lower prices.
4. Design that meets all regions' regulations.

People keep asking why a company would back off the 200mm^2+ designs of the last generation. Simple: for reasons of #2 and #3. Process reductions are becoming more expensive, and taking longer to achieve. For that reason you have to start with a smaller upfront cost.

And note that #1 is not "as much power as we can fit", it's a much lower requirement.
Anyways, off to lunch.

This falls in line with my expectations, at least for the PS4. Xbox could have more features and chip customisations.

For the PS4:

APU/SoC with 1.8-2.0TF total power.
4GB RAM
~250mm^2
<150W total system power consumption while gaming and very low idle/non-gaming consumption.

I think that is reasonable while also giving a ~8x jump in power, which in a closed box will be enough.
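For reference, the ~8x follows from the rough peak numbers (taking Xenos at its commonly quoted ~0.24 TFLOPS):

\[ \frac{1.8\text{--}2.0\ \text{TFLOPS}}{0.24\ \text{TFLOPS}} \approx 7.5\text{--}8.3\times \]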
 
If what is being said now (stuck with shitty HD 7750 notebook-level GPUs for ten years) is true, I will switch from consoles to a PC or Steambox...
 
They did not "move back"... VLIW4 is very different from Xenos/R580/etc.



I wouldn't be so sure; it is probably very workload dependent. The more complex the shaders are, the larger the benefit from modern architectures, and one could argue that games nowadays tend to have simple shaders because consoles are the lowest common denominator. Modern architectures differ from Xenos in several ways, including better latency hiding by keeping more threads in execution, read/write caches, more efficient handling of branches, etc.

All the things I have seen describe it as VLIW4 or VLIW4+1, and all I've seen saying otherwise is your word. No disrespect, but I trust the multiple sources on the Internet over your word.

Even the AnandTech link you provided showed AMD moving from 4+1 to 5.

I find it highly unlikely that AMD would go against every DX9 GPU architecture they have produced and not use VLIW for Xenos, especially since, when Xenos was being designed, there were really no alternatives to VLIW.
 
Wow, this is why I try to say nothing, it's amazing what people will infer from almost no data.

Think about things logically. What are the important features required in a console, from the perspective of the manufacturer?

1. Enough power that users can see a clear reason to upgrade.
2. Low manufacturing cost.
3. Path to lower manufacturing costs, to enable profits and lower prices.
4. Design that meets all regions' regulations.

People keep asking why a company would back off the 200mm^2+ designs of the last generation. Simple: for reasons of #2 and #3. Process reductions are becoming more expensive, and taking longer to achieve. For that reason you have to start with a smaller upfront cost.

And note that #1 is not "as much power as we can fit", it's a much lower requirement.
Anyways, off to lunch.

I'm not convinced that a ~1 TFLOPS system will be able to achieve this; it's becoming increasingly clear that Microsoft no longer cares about gaming.

Nextbox will be a set-top box through and through; gaming will only be a small supplement to media features this time around, I'm betting.

Edit: Understood.
 
Cut out the hope/regret opinion talk or be exiled from this corner of the board. I'm really getting sick of it now and feel it's time for a new New Year tightening of standards around here.
 
I'm not convinced that a ~1 TFLOPS system will be able to achieve this; it's becoming increasingly clear that Microsoft no longer cares about gaming.

We were always going to get to a point where hardware progression would be slowed down due to power and heat; it seems we have hit that point.
 