NG Tech Budget: How low is too low?

How low is too low?

  • 120mm2 - (existing xb360/PS3 SOC, clock it higher, and throw in some new waggle)
    Votes: 0 (0.0%)

  • Total voters: 27

TheChefO

Banned
There's been a lot of speculation on how the nextgen chipsets will shape up. More and more rumors are popping up of very low-spec consoles for nextgen.

So the question is, how low would the spec have to be to force you to jump out of console land and onto PC ... where you say enough is enough and the value proposition isn't there anymore (or an iPad for the tablet lovers out there! :smile: )?

Pick the one which is the lowest you would honestly purchase for a nextgen gaming console.

Current projected die budget for xb360/ps3 SoC at 28nm is ~118-126mm2.

120mm2 - (existing xb360/PS3 SOC, clock it higher, and throw in some new waggle)
178mm2 - (HD6670 Turks 118mm2 + Cell/Xenon OC 60mm2)
210mm2 - (HD7770 Cape Verde 120mm2 + 1.5x Cell/Xenon 90mm2)
370mm2 - (HD7850 Pitcairn 250mm2 + 2x Cell/Xenon 120mm2)
472mm2 - (HD7950 Tahiti 352mm2 + 2x Cell/Xenon 120mm2)
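As a rough sanity check on that 28nm projection (a hedged sketch, not anything the platform holders have published): ideal area scaling goes with the square of the node ratio, and the gap between the ideal number below and the ~118-126mm2 projection shows how far real shrinks fall short of ideal.

```python
# Ideal die-shrink scaling: area ~ (new_node / old_node)^2.
# Real shrinks fall well short of this, which is why the thread's
# ~118-126mm2 projection is roughly double the ideal number below.
def ideal_shrink(area_mm2, old_nm, new_nm):
    return area_mm2 * (new_nm / old_nm) ** 2

# xb360 "Valhalla" 45nm SoC figure cited later in this thread.
valhalla_45nm = 168.0
print(round(ideal_shrink(valhalla_45nm, 45, 28), 1))  # ~65.0 mm^2
```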


**Note: if your response is: "I don't care what spec as long as it's Sony/MS or has xx game then I'll buy that", then your selection is "the lowest" as hardware isn't a factor for you.


I'm not throwing price into the mix as that's up to the platform holder, but I think it's safe to assume the bottom end is ~$250 and the top end is ~$500.
 
Last edited by a moderator:
Surely it depends entirely on the experience? As a home-box, traditional experience, I'd want higher. If they release something new like a tablet console or some new Kinect2 super tech or something unexpected, lower specs may suffice. Without that knowledge, the value-proposition cannot be quantified and people are just picking random options based solely on a prejudice for metrics. You may be surprised what you'd buy performancewise if the experience was there.
 
Surely it depends entirely on the experience? As a home-box, traditional experience, I'd want higher. If they release something new like a tablet console or some new Kinect2 super tech or something unexpected, lower specs may suffice. Without that knowledge, the value-proposition cannot be quantified and people are just picking random options based solely on a prejudice for metrics. You may be surprised what you'd buy performancewise if the experience was there.

True.

We saw with Wii how last gen tech could manage great sales and acceptance due to the novel new interface.

I'm assuming the interface will be built on known entities (Move/Kinect/gamepad) so the wow factor that Wii presented will be nullified.

I'm not seeing either Sony or MS coming up with something to trump Kinect/Move to base sales on, so those interfaces will likely be present either standard in the box, or as an accessory.


Bottom line on spec, though: what performance level is acceptable to potential buyers here at B3D, regardless of external interface?

Wii was unique in that the only motion gaming available was on Wii. So adoption even among techies was high due to the unique and novel interface and low price.

With Move/Kinect, that novelty is no longer a unique experience to the new ps4/xb720 so if one wanted to, one could get that experience right now on xb360/ps3.

Thus, I'm not expecting nextgen consoles to lean on the concept of a "new interface" being the killer app.

A better version of the already known experience is the call to action on their end.
 
OT
Valhalla is that tiny? 8O
?OT

EDIT

I've found 168 sq.mm with a google search.

EDIT 2
Chef I'm not sure I get your calculation. Like what do you mean by 1.5x xenon at launch? Etc.

As a reference, coming from the animated "predict" thread and looking at Llano and Power A2 module sizes at 45nm and 32nm respectively, I believe a 6-core, 3-SIMD, 16-ROP part would take around 3/4 the size of Llano. That's around 180 sq.mm. For the incomplete HD6670 GPU on TSMC's 28nm process it could end up around 80 sq.mm (or south of). That's, let's say, 260mm2 total.

So do I get your post properly if I vote too low is 210mm2 which is: HD7770 Cape Verde 120mm2 + 1.5x Cell/Xenon 90mm2 (the end is obscure to me)?
 
OT
Valhalla is that tiny? 8O
?OT

The ~120mm2 size is projected for 28nm. The current chips are on 40-45nm.


Chef I'm not sure I get your calculation. Like what do you mean by 1.5x xenon at launch? Etc.

1.5x Xenon is difficult due to the odd core count, so in this case it would be 4 cores + bigger cache + better VMX units or OoOE.

Cell would be 9 SPEs + an OoOE PPE + bigger cache.

Whatever fits for the above in the die budget for both.

So do I get your post properly if I vote too low is 210mm2 which is: HD7770 Cape Verde 120mm2 + 1.5x Cell/Xenon 90mm2 (the end is obscure to me)?

If that is the lowest you would buy, then yes! :smile:
 
You can't judge performance in isolation from how much it'll cost you and what software and services it'll run and what input device it'll use - you have to buy the thing and then you use it to do stuff.

Even if you could, judging it by die area would be troublesome - most people wouldn't know a 45nm in-order CPU core from a 32nm OoO dual core module, and even if they'd heard of those things they wouldn't be qualified to do much more than regurgitate misinterpreted factoids from developers or tech websites (however fun that might be, I like doing it). And never mind that there might be disabled or reserved sections of the chip, or that the hardware might be hidden behind an abstraction layer and managed code, or that the processor architecture might be cutting edge or outdated or primarily designed for a different role.

On the PC you can clearly see just how much of a mistake it would be to make a performance and/or value judgement based solely on die size - a gamer would be hard pressed to say that Bulldozer was a better choice than Sandy Bridge because Sandy Bridge is too small ("I simply won't go smaller than 250 mm^2!!!").

I don't like the way that "waggle" - the only reference to non standard input devices - is lumped only with the very smallest die size, and that the very smallest die size is described as being a last gen system clocked higher (which would offer awful PPW over a newer solution). This is further compounded by it also being the only option for people that aren't willing to make a purchase decision based solely on processor die size.

Finally, "Tech budget" could incorporate all kinds of things from R&D to IP licensing to paying for the PSU to paying for the cooler (a faster smaller chip could outperform a slower larger chip). You have to make a big leap to shrink "Tech Budget" down to just die size, and a number of further leaps to get from die size to performance and finally to a value judgement for an individual about whether a platform is worth buying.
 
You can't judge performance in isolation from how much it'll cost you and what software and services it'll run and what input device it'll use - you have to buy the thing and then you use it to do stuff.

Even if you could, judging it by die area would be troublesome - most people wouldn't know a 45nm in-order CPU core from a 32nm OoO dual core module, and even if they'd heard of those things they wouldn't be qualified to do much more than regurgitate misinterpreted factoids from developers or tech websites (however fun that might be, I like doing it). And never mind that there might be disabled or reserved sections of the chip, or that the hardware might be hidden behind an abstraction layer and managed code, or that the processor architecture might be cutting edge or outdated or primarily designed for a different role.

On the PC you can clearly see just how much of a mistake it would be to make a performance and/or value judgement based solely on die size - a gamer would be hard pressed to say that Bulldozer was a better choice than Sandy Bridge because Sandy Bridge is too small ("I simply won't go smaller than 250 mm^2!!!").

I don't like the way that "waggle" - the only reference to non standard input devices - is lumped only with the very smallest die size, and that the very smallest die size is described as being a last gen system clocked higher (which would offer awful PPW over a newer solution). This is further compounded by it also being the only option for people that aren't willing to make a purchase decision based solely on processor die size.

Finally, "Tech budget" could incorporate all kinds of things from R&D to IP licensing to paying for the PSU to paying for the cooler (a faster smaller chip could outperform a slower larger chip). You have to make a big leap to shrink "Tech Budget" down to just die size, and a number of further leaps to get from die size to performance and finally to a value judgement for an individual about whether a platform is worth buying.

I'll grant you that your post has a lot of merit in the complexities of what goes into the die to get x performance out.

Having said that, there's no magic sauce here for most performance metrics outside of Intel.

We know AMD will be providing the GPUs and from what IBM said, they will be handling the CPUs.

We generally know what a 28nm 250mm2 AMD GPU chip will perform like given reasonable thresholds in console environments, and what 120mm2 will get you.

For the IBM CPU, it is a bit trickier depending on what direction MS/Sony are looking to go, but given a die size of 90-120mm2, we can expect a certain level of performance, given what we know about the die sizes of Cell and Xenon and their performance.


So with these knowns, we can get an estimate for what to expect out of overall system performance and based on that, what we would be willing to accept as a consumer, regardless of the accessories or periphery.
 
I'll grant you that your post has a lot of merit in the complexities of what goes into the die to get x performance out.

Having said that, there's no magic sauce here for most performance metrics outside of Intel.

We know AMD will be providing the GPUs and from what IBM said, they will be handling the CPUs.

We generally know what a 28nm 250mm2 AMD GPU chip will perform like given reasonable thresholds in console environments, and what 120mm2 will get you.

For the IBM CPU, it is a bit trickier depending on what direction MS/Sony are looking to go, but given a die size of 90-120mm2, we can expect a certain level of performance, given what we know about the die sizes of Cell and Xenon and their performance.

I think that IBM will probably be able to make a more efficient processor now than they could in 2005, both in terms of performance-per-Watt and per mm^2, so the whole new-vs-old question could be very significant if older chips were recycled in the Wii-type scenario. And per-Watt seems particularly important given the kind of power ceiling that PS360 seemed to be pushing against and the form factor that the WiiU is targeting. I'm really quite interested to see how much juice all the next generation systems consume.

So with these knowns, we can get an estimate for what to expect out of overall system performance and based on that, what we would be willing to accept as a consumer, regardless of the accessories or periphery.

There just seem to be so many unknowns; Cell is in some ways competitive with much bigger and hotter processors, but in games can easily struggle against other, "weaker" processors; a SoC could be at a great disadvantage against two moderately large chips, but with sufficiently tight integration there could be some relevant workloads where it would be vastly more capable.

eDRAM is another big issue - the 360 could have used a 256-bit memory bus and been, overall, a better system with no framebuffer restrictions and no need to tile and spend CPU time resubmitting overlapping geometry and/or doing more culling work. This would probably have made the 360 more expensive to make (at least over the long term) but would have been reflected in a huge decrease in the amount of on-GPU-package silicon.

I really don't know what to expect from the next generation systems yet and it's actually quite exciting.
 
Interesting results so far.

It seems most people are with me (small sample size, but it makes my case! :devilish: ) in that a Pitcairn-class GPU is about as low as these guys should be shooting for.

Having said that:

One alternate choice which I did not post was the possibility of a 2.5D/3D stacked chipset comprising a 100mm2 CPU and two 100mm2 GPU modules, totaling a 300mm2 die size with a high speed interconnect.

Not that I expect such a design, as I'd expect the yields for such a complex design would be worse than if MS/Sony just bit the bullet and said screw it, we're going with a ~300mm2 GPU and a ~100mm2 CPU.

But such a design would help make up for the lack of transistor count by bumping interconnect bandwidth.
 
Well there is also the fact that there's a 160 sq.mm gap between two of the choices offered.
It's not like there is nothing in between.
120 -> +58 -> 178 -> +32 -> 210 -> +160 -> 370 -> +102 -> 472

Like most polls, the thing is biased to begin with ;)
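For reference, the gaps between the poll options can be computed directly from the OP's numbers:

```python
# Poll options from the OP, in mm^2.
options = [120, 178, 210, 370, 472]
# Difference between each option and the next one up.
gaps = [b - a for a, b in zip(options, options[1:])]
print(gaps)  # [58, 32, 160, 102]
```

The jump from 210 to 370 really is by far the largest step in the list.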
 
Well there is also the fact that there's a 160 sq.mm gap between two of the choices offered.
It's not like there is nothing in between.
120 -> +58 -> 178 -> +32 -> 210 -> +160 -> 370 -> +102 -> 472

Like most polls, the thing is biased to begin with ;)

...

One alternate choice which I did not post was the possibility of a 2.5D/3D stacked chipset comprising a 100mm2 CPU and two 100mm2 GPU modules, totaling a 300mm2 die size with a high speed interconnect.

Not that I expect such a design, as I'd expect the yields for such a complex design would be worse than if MS/Sony just bit the bullet and said screw it, we're going with a ~300mm2 GPU and a ~100mm2 CPU.

But such a design would help make up for the lack of transistor count by bumping interconnect bandwidth.

The reason the above poll choices were selected was to be relevant to products on the market.

Granted, there was a relatively large gap between 210mm2 and 370mm2, but there is also a relatively large gap between Pitcairn and Cape Verde. ;)

For a completely hypothetical solution, the above (stacked 300mm2) would fit that middle ground in my mind.
 
Interesting results so far.

It seems most people are with me (small sample size, but it makes my case! :devilish: ) in that a Pitcairn-class GPU is about as low as these guys should be shooting for.

Having said that:

One alternate choice which I did not post was the possibility of a 2.5D/3D stacked chipset comprising a 100mm2 CPU and two 100mm2 GPU modules, totaling a 300mm2 die size with a high speed interconnect.

Not that I expect such a design, as I'd expect the yields for such a complex design would be worse than if MS/Sony just bit the bullet and said screw it, we're going with a ~300mm2 GPU and a ~100mm2 CPU.

But such a design would help make up for the lack of transistor count by bumping interconnect bandwidth.

You're on a website full of graphics whores and developers who want all the power they can get at any cost and still you only got slightly more than half above the centerline of your poll... something to consider.
 
I didn't vote, because I don't know the answer. I'd be very disappointed if the next systems aren't powerful and a huge increase from the current gen, but at the end of the day even a weak box is going to provide entertainment that is worth my time and money. I still don't want to be identified as being OK with a weak box :)
 
Went for Pitcairn as well, mostly because it should be doable both from a die size aspect as well as falling within an acceptable wattage/heat envelope. That does not mean I will give up on the consoles if they are weaker than that, though.

For me it is more about the look of the games rather than power as such.

What I want from next gen, is:

Better lighting/shadowing - please get rid of all the flickering shadows, especially the auto shadowing.
720p is OK, but some (good) AA would be much appreciated.
More high-res textures and better AF.

If those things can be done with whatever hardware, I'm in...
 
You're on a website full of graphics whores and developers who want all the power they can get at any cost and still you only got slightly more than half above the centerline of your poll... something to consider.

That's true, but at the same time, people in general can tell, with two boxes next to each other running the same content, which one's better. In this day and age of social media, the word would get out quickly if one box was clearly superior to the other. That won't enable a userbase landslide, but that is the way Xbox entered the market in the first place and managed to push 20 million units, and this with a branding/image problem/rejection.

Or if both are floundering in Wii-land, we may just get an overall shoulder shrug from the masses that say, "My cheaper box that I own right now looks roughly the same, why bother? In fact, I think I'll take the money I was saving for the new box and buy a *insert_favorite_alternate_here* instead".


I think it would be foolish for Sony/MS to tempt fate where the alternate is growing stronger by the day and at the same time, gaining social momentum.

The one thing those devices will never have is the ability to push a 200W TDP. Weaken the power budget, and it strengthens the argument for the alternate.
 
That's true, but at the same time, people in general can tell, with two boxes next to each other running the same content, which one's better.

In the past, when the differences were huge, yeah. Nowadays though, I'm not so sure... Could the average person tell the difference if one was x2 the power? Would such differences show up enough? Personally I would wager the average person would have no clue. As you reference, Xbox was probably 3-4x the raw computing power of PS2 with twice the RAM, and people chose the PS2 by ~7 to 1 over the Xbox, and the difference in power level then was greater than it is now.

I don't think that today, you can do enough with x2 the power to make the average consumer notice. What can you do? Go with x2 the res, 1080p instead of 720p? Many people don't have 1080p TVs even today. Double frame rates? People don't care that much about 60 fps now, many people can't even tell the difference anyway. Double poly counts? Can't tell the difference between a 20k poly model and a 40k poly model, if the artist is good. Double texture data? Virtual texturing solves this problem. More shader effects? I guess, but I just don't see x2 improvement for x2 power happening. I don't even see 50% improvement happening with x2 power.

People aren't going to bother to notice that, the way they don't notice many titles today look considerably better on one console or another, to our graphics whore eyes. They will notice that one is $299 and one is $399 though.
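As a quick aside on the resolution point above: the jump from 720p to 1080p is actually a bit more than "x2 the res" in raw pixel terms.

```python
# Raw pixel counts for the two common HD resolutions.
px_720p = 1280 * 720    # 921,600 pixels
px_1080p = 1920 * 1080  # 2,073,600 pixels
print(px_1080p / px_720p)  # 2.25
```

So even a straight doubling of GPU power would not quite cover a 720p-to-1080p move at the same per-pixel quality.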
 
In the past, when the differences were huge, yeah. Nowadays though, I'm not so sure... Could the average person tell the difference if one was x2 the power? Would such differences show up enough? Personally I would wager the average person would have no clue. As you reference, Xbox was probably 3-4x the raw computing power of PS2 with twice the RAM, and people chose the PS2 by ~7 to 1 over the Xbox, and the difference in power level then was greater than it is now.

I don't think that today, you can do enough with x2 the power to make the average consumer notice. What can you do? Go with x2 the res, 1080p instead of 720p? Many people don't have 1080p TVs even today. Double frame rates? People don't care that much about 60 fps now, many people can't even tell the difference anyway. Double poly counts? Can't tell the difference between a 20k poly model and a 40k poly model, if the artist is good. Double texture data? Virtual texturing solves this problem. More shader effects? I guess, but I just don't see x2 improvement for x2 power happening. I don't even see 50% improvement happening with x2 power.

People aren't going to bother to notice that, the way they don't notice many titles today look considerably better on one console or another, to our graphics whore eyes. They will notice that one is $299 and one is $399 though.

Two things:

1) Yes, PS2 sold better, but there were many other factors which contributed to its success and other factors which limited Microsoft's success. Those positive and negative factors have been neutralized for the most part by now. Sony and Microsoft are on fairly equal ground in the eyes of most. They offer mostly the same major franchises, the brand equity is relatively high for both, and social acceptance is relatively high for both.

So to assume that people just didn't care about the graphics differences between PS2 and Xbox, and that's why they chose PS2, is to ignore the many other factors which put PS2 head and shoulders above Xbox at the time. Think of it this way: how well would Xbox have sold if NOT for the superior hardware?


2) The price argument is valid only if both machines are equally desirable. Wii was $99 this past Christmas ... with bundles. Still, it only managed to sell roughly half what the Full Retail $200-$400 Xbox360 sold.

At the outset of ps3 vs xb360, $500-600 vs $300-400. The interesting thing is recently with ps3 being $250 and xbox360 being $300 (HDD), Xbox360 still sold substantially more than ps3.

Going back a bit further, Dreamcast was $199 vs the $300 PS2. Gamecube was $200 vs the $300 PS2.

Bottom line, price is only important if all other aspects are equal. For this discussion, it doesn't get much more equal than ps3 and xb360, yet xb360 is still on top.


It's all about creating a desirable product. Part of that, is spec.
 
Ninjaprime said:
In the past, when the differences were huge, yeah. Nowadays though, I'm not so sure... Could the average person tell the difference if one was x2 the power? Would such differences show up enough? Personally I would wager the average person would have no clue. As you reference, Xbox was probably 3-4x the raw computing power of PS2 with twice the RAM, and people chose the PS2 by ~7 to 1 over the Xbox, and the difference in power level then was greater than it is now.

I don't think that today, you can do enough with x2 the power to make the average consumer notice. What can you do? Go with x2 the res, 1080p instead of 720p? Many people don't have 1080p TVs even today. Double frame rates? People don't care that much about 60 fps now, many people can't even tell the difference anyway. Double poly counts? Can't tell the difference between a 20k poly model and a 40k poly model, if the artist is good. Double texture data? Virtual texturing solves this problem. More shader effects? I guess, but I just don't see x2 improvement for x2 power happening. I don't even see 50% improvement happening with x2 power.

People aren't going to bother to notice that, the way they don't notice many titles today look considerably better on one console or another, to our graphics whore eyes. They will notice that one is $299 and one is $399 though.

People aren't blind; they might not care too much, but they can SEE!

All my friends who have a 360 and don't have a particularly large passion for gaming (they have maybe 2-3 games and play rarely) can still tell that FIFA on my PS3 runs worse than on their 360, if they've played it recently (frame rate and slightly worse graphics). They are definitely normal, average people who might play 1-2 times a month or less! (Note: they don't say stuff like "frame rate" but say the pace seems a bit different, etc. I tell them about frame rate at that point and they think I know how to hack NASA...)

Your PS2 vs Xbox analysis does not make sense.

It's like saying that because a basic Porsche Boxster sells more than a 911 Turbo, even though the 911 Turbo is 2x more powerful, people do not care about horsepower!!!! The fact that it costs about 3x as much doesn't matter, right?

If Porsche sold the 911 Carrera 4 at the same price as the Carrera 4S (looks the same but more powerful) and people for some reason picked the less powerful one (citing fuel efficiency or whatever), it would be a valid point. Same applies to your little PS2 vs Xbox theory.

Combined, the PS3 and x360 sold much more than the Wii (in revenue). I wonder why, if casuals don't see differences in graphics...
 
You're on a website full of graphics whores and developers who want all the power they can get at any cost ...

If that were true, you'd see every vote going for Tahiti.

The response here has been very reasonable, and the die size of the (by far) most popular selection is nearly 100mm2 less than PS3's die size. Considering also that Pitcairn is officially unveiled now with a die size 38mm2 smaller than projected, that puts the total die size even smaller at 332mm2, which is smaller than a Tahiti core by itself :oops:.

Again:
xb360 die size, 438mm2
ps3 die size, 470mm2

So the vast majority here of "graphics whores at any cost" are choosing a minimum die size of 100mm2 less than either ps3 or xb360 ...

I'd say it's a very reasonable expectation from a very reasoned crowd here at B3D... Nothing outlandish or rabid about it.
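The arithmetic behind that 332mm2 total, spelled out using the numbers from the post above:

```python
# Pitcairn came in 38mm2 under the 250mm2 estimate used in the poll.
pitcairn_actual = 250 - 38   # 212 mm^2
cpu_budget = 120             # mm^2, the 2x Cell/Xenon option from the poll
total = pitcairn_actual + cpu_budget
print(total)  # 332 mm^2 -- under Tahiti's 352mm2 on its own
```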
 