Predict: The Next Generation Console Tech

Could Oban simply be a new 360 revision that's been semi-accurately considered a new product?

Potentially, but one thing that doesn't make sense to me is the number of supposed wafers. He claimed around 10k 300mm wafers. Given that the XCGPU, even with the eDRAM on the actual die rather than just on the package, would be sub-200mm² (probably closer to 150mm²) @ 32nm, you'd get potentially millions of chips out of just this run. Yields would probably be very good for a chip this small.

IMO, that would be a full production run, which conflicts with IBM's statement that full-scale production isn't starting until late in the year. Perhaps he got the number of wafers wrong. Hell, 10k wafers for anything would probably cost $50-100 million.
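Quick back-of-the-envelope in Python on why 10k wafers reads like a full production run; the die size, edge loss, and yield here are just my guesses, not anything from the rumor:

```python
import math

# Rough gross-die count for 10k 300mm wafers at ~150mm^2/die (all assumed).
wafer_diameter_mm = 300
die_area_mm2 = 150        # guessed XCGPU size at 32nm with eDRAM on die
wafers = 10_000
yield_rate = 0.8          # guess; a die this small should yield well

wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2      # ~70,686 mm^2
gross_per_wafer = int(wafer_area / die_area_mm2 * 0.9)   # ~10% lost to edges/scribe
good_chips = int(gross_per_wafer * yield_rate) * wafers

print(f"~{gross_per_wafer} gross dies/wafer, ~{good_chips / 1e6:.1f}M good chips")
```

That lands in the 3-4 million chip range, hence "millions of chips out of just this run".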
 
Anyway, an SoC on day 1 doesn't really make sense for a high-powered next-gen console.

In general I agree.

However, if the die budget comprises a rather large GPU (400mm²) and a much smaller CPU (<100mm²), then the size of the larger component would already be a yield issue, so it wouldn't be much more prohibitive to have the CPU on die as well.

Still a disadvantage, but significantly less so than with a balanced GPU/CPU split.

The advantage of an SoC from day one would be a simpler/cheaper die-shrink transition, and hypothetically it would allow for very high bandwidth between the CPU and GPU, which would be great for GPGPU.
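To put toy numbers on that yield argument, here's a simple Poisson defect-yield model (my choice of model, and the defect density is a pure assumption) comparing a 400mm² GPU die alone against a 500mm² SoC with the CPU folded in:

```python
import math

def die_yield(area_mm2, d0_per_cm2=0.4):
    """Poisson yield model: Y = exp(-A * D0). D0 is an assumed defect density."""
    return math.exp(-(area_mm2 / 100) * d0_per_cm2)

print(f"400mm2 GPU alone:   {die_yield(400):.1%}")  # ~20%
print(f"500mm2 GPU+CPU SoC: {die_yield(500):.1%}")  # ~14%
```

So yields are already painful at 400mm², and folding in a <100mm² CPU makes them worse, but not catastrophically so.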
 
Storage-wise, MSAA is no different from SSAA. That is, MSAA stores colour and Z samples, then determines triangle coverage. The difference from SSAA is that SSAA multiplies the texture sampling per pixel whereas MSAA doesn't. The point of MSAA is that it's a special case of supersampling that uses coverage/depth to effectively work only on polygon edges.

The texture sampling is not supersampled, so in-polygon pixels aren't affected. Since you've got per-pixel shaders potentially using texture ops, you're not supersampling those either. That's why brute-force SSAA can be so expensive for performance: you've got N texture samples per pixel, and thus N x the shader texture ops, as well as increased texture bandwidth demands. You've got AF for more efficient sampling of the textures anyway, although that clearly has little benefit for the shading. Increased texture sampling requires more bandwidth, not more storage in memory, since you're just sampling textures that already exist there.

I assumed he knew that much.
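A schematic way to see the cost split described above (the sample count and the 8-texture-ops figure are purely illustrative):

```python
N = 4  # samples per pixel, e.g. 4x AA

def per_pixel_cost(mode, shader_tex_ops=8):
    color_z_storage = N                        # identical for SSAA and MSAA
    shader_runs = N if mode == "SSAA" else 1   # only SSAA multiplies the shading
    return color_z_storage, shader_runs, shader_runs * shader_tex_ops

for mode in ("SSAA", "MSAA"):
    storage, runs, tex = per_pixel_cost(mode)
    print(f"{mode}: {storage}x colour/Z storage, {runs} shader run(s), {tex} tex samples")
```

Same framebuffer storage either way; the N-fold blowup is in shading and texture work.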
 
Worthless rumor. Xbox 3 GPU will at least be based on Southern Islands.

And your knowledge about this is based on what?

I mean, I am not saying you are wrong, but saying one rumor is wrong because of a different rumor is a bit... well, unsubstantiated.

Though I think a 6670 is already pretty slow by today's PC standards. Also, this thing is merely 120mm² at 40nm and 66 Watts. I'd rather think of Wuu using something like this.
 
And your knowledge about this is based on what?

I mean, I am not saying you are wrong, but saying one rumor is wrong because of a different rumor is a bit... well, unsubstantiated.

Though I think a 6670 is already pretty slow by today's PC standards. Also, this thing is merely 120mm² at 40nm and 66 Watts. I'd rather think of Wuu using something like this.

Just based on my own gut feeling, nothing more. Well, that and earlier rumors of a Radeon 7000-based GPU.
 
http://www.ign.com/articles/2012/01/24/xbox-720-will-be-six-times-as-powerful-as-current-gen

An HD6670 in Fall 2013. Can't stop laughing at how sad a prospect that is... almost surely fake. 20% better than the Wii U with 480 SPUs is interesting though.

Six times as fast? Raw spec-wise that thing isn't even 4 times faster than Xenos. And look at the TDP: it's 66W at 40nm; at 28nm it won't be much more than 40W. Are they aiming for a handheld or what?!

It's definitely not powerful enough for a next-gen console. Early adopters won't upgrade easily for just better textures or slightly better IQ, and it's too conservative from the point of view of the competition. It would be "easy" for Sony to be visibly better than that. I won't believe it for now :)

The low-end figure for next generation should be at least 2 teraflops, and it's surely possible to stay in the 100W ballpark for the GPU at 28nm.
 
http://www.ign.com/articles/2012/01/24/xbox-720-will-be-six-times-as-powerful-as-current-gen

An HD6670 in Fall 2013. Can't stop laughing at how sad a prospect that is... almost surely fake. 20% better than the Wii U with 480 SPUs is interesting though.

That's a disappointing prospect if it turns out to be true! As someone who already has a 5770, what would be the point of me even considering the platform? I could easily throw in a 6950 or whatever the 7000-series equivalent is and never have to worry about upgrading. That's without even considering the PS Vita/smartphone/tablet scene, which by 2013 will be capable of producing some pretty impressive graphics for gaming. Seriously, anything less than a 6950 equivalent in terms of graphics capability is a total waste of time for them, because by the time 2013 comes around there will be numerous alternatives that could provide comparable gaming experiences while offering more practical uses (smartphones, tablets and PCs).
 
Just for fun
Super-SoC design:
- 4 PowerPC A2 cores, 4-way SMT, 16 in-order threads -> 100 mm²
- 2 POWER7 cores, 2-way SMT, ~4 OoO threads -> 100 mm²
- 64 MB of eDRAM as shared L3 cache -> 50 mm²
- GCN architecture, 24 CUs, 48 TMUs, 24 ROPs -> 250 mm²
- 384-bit GDDR5 shared memory controller + everything else -> 50 mm²

12 x 2Gbit chips -> 3 GB @ 4GHz.

Grand total: 550 mm², and no more than 200W of TDP (just a guesstimate).

450mm wafers are expected in 2015-16, and the price per mm² will fall more quickly. Already having everything packed into one die has some advantages for future die shrinks. What's the smallest a 384-bit controller could shrink to? 250mm²?
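Sanity-checking that budget in Python (assuming "4GHz" means 4 Gbps effective per pin on GDDR5):

```python
blocks = {
    "4x PowerPC A2": 100,   # mm^2, all figures from the list above
    "2x POWER7": 100,
    "64MB eDRAM L3": 50,
    "GCN, 24 CUs": 250,
    "384-bit MC + misc": 50,
}
print(f"Die total: {sum(blocks.values())} mm^2")          # 550 mm^2

bus_bits, gbps_per_pin = 384, 4
print(f"Bandwidth: {bus_bits // 8 * gbps_per_pin} GB/s")  # 192 GB/s
print(f"Capacity: {12 * 2 // 8} GB")                      # 12 chips x 2 Gbit
```

The areas do sum to 550 mm², and that bus would be good for ~192 GB/s.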
 
Six times as fast? Raw spec-wise that thing isn't even 4 times faster than Xenos. And look at the TDP: it's 66W at 40nm; at 28nm it won't be much more than 40W. Are they aiming for a handheld or what?!

It's definitely not powerful enough for a next-gen console. Early adopters won't upgrade easily for just better textures or slightly better IQ, and it's too conservative from the point of view of the competition. It would be "easy" for Sony to be visibly better than that. I won't believe it for now :)

The low-end figure for next generation should be at least 2 teraflops, and it's surely possible to stay in the 100W ballpark for the GPU at 28nm.

Say MS releases a console that's only about 4x as powerful as the 360, but with a Kinect that works 10x as well as the Kinect that's out now, and the console & new Kinect are sold together at $299.


You don't think people would pick this up over buying a 360 & Kinect at about the same price?

It would still be more powerful than the Wii U & might be cheaper to buy.

And even if the PS4 is 2x as powerful, there's no guarantee that the visual difference will be enough to make people choose the PS4 over the next Xbox at a higher price, if the graphics are good enough for the average person & the next Xbox has the games that they want.
 
Xbox 720 and PS4 better offer AT LEAST 6x the power...

Just running the graphics that the 360 and PS3 are kicking out at native 1080p is going to consume just over 2x the power (1920x1080 is 2.25x the pixels of 1280x720).
 
The 6x figure doesn't add up with the HD6670 card's specifications. The thing would need to be clocked pretty high to match that statement, 1.2GHz+.
Still, sadly, factoring in an SoC, whatever an HD Kinect may cost, memory, etc., and a sucky economy, the thing seems believable to me. Alstrong has already made the point on that matter multiple times (precisely on HD5670 level of perf). Hey, 32nm grants us a bonus, though not the one we were expecting, a fairly tiny one :LOL:
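Roughly where the "clocked pretty high" bit comes from: Xenos FLOPS depend on how you count (48 ALUs x (4+1) lanes x 2 ops x 500MHz = 240 GFLOPS, or 192 if you only count the vec4 lanes), so on the HD6670's 480 SPs the required clock lands between ~1.2 and 1.5GHz versus the stock 800MHz:

```python
HD6670_SPS = 480  # SPs on a stock HD6670 (800MHz)

for xenos_gflops in (192, 240):            # both commonly quoted Xenos figures
    target = 6 * xenos_gflops              # "six times Xenos", in GFLOPS
    clock_ghz = target / (HD6670_SPS * 2)  # 2 flops/SP/clock (MADD)
    print(f"Xenos @ {xenos_gflops} GFLOPS -> 6x needs {clock_ghz:.2f} GHz")
```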

Anyway, wait and see for more info; at this time it sounds in line with the lowest estimates around here. Even if the system were to peak pretty low in raw GPU perf, FLOPS don't tell the whole story: what is the RBE configuration? The fixed-function parts of the GPU? Bus width, etc.

The highest Llano SKU, with 4 very-high-performance OoO cores running at 2.9GHz and a 400SP GPU running @600MHz, has a TDP of 100 Watts. I'm not sure MS would want to go higher, as that already implies a potent cooling system (still, at least it's only one). HD6670-level performance means that, assuming a TDP of 100 Watts, MS has less power left to play with for the CPU (even less if they aim at 6x the Xenos raw throughput).
For ref Llano review:
http://www.anandtech.com/show/4476/amd-a83850-review/

I would expect 2GB of GDDR5 for the system RAM, no eDRAM (or not used in the way people here expect; think of it simply as L3), and a 128-bit bus. That can get us anywhere between 50 and 80 GB/s of bandwidth. As MS seems to be pretty conservative on specs, I hope they would be wise enough to spend what's needed on RAM so the system can give its max. I hope that in regard to RBEs the thing would be closer to an HD57xx or half an HD6850 than to an HD6670 ==> 16 RBE.
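For reference, the 50-80 GB/s range is just bus width times per-pin data rate; the rates below are my guesses at plausible GDDR5 speeds for that timeframe:

```python
bus_bytes = 128 // 8  # 128-bit bus
for gbps_per_pin in (3.2, 4.0, 5.0):  # assumed GDDR5 effective data rates
    print(f"{gbps_per_pin} Gbps/pin -> {bus_bytes * gbps_per_pin:.0f} GB/s")
```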

In the context of what looks like a conservative system, I don't think we should expect miracles in regard to the CPU either: a not-too-aggressive OoO execution engine with 4-way SMT added on top. I wouldn't expect more than 4 cores. I would expect a widening of the SIMD.

Overall, I'd say that even if MS compromise on raw power, they won't compromise on the memory subsystem and cache hierarchy.

EDIT
Still a bit depressing, as it looks like the thing could have been born on 40/45nm lithography.
 
Say MS releases a console that's only about 4x as powerful as the 360, but with a Kinect that works 10x as well as the Kinect that's out now, and the console & new Kinect are sold together at $299.


You don't think people would pick this up over buying a 360 & Kinect at about the same price?

It would still be more powerful than the Wii U & might be cheaper to buy.

And even if the PS4 is 2x as powerful, there's no guarantee that the visual difference will be enough to make people choose the PS4 over the next Xbox at a higher price, if the graphics are good enough for the average person & the next Xbox has the games that they want.

There is a certain threshold where new rendering technology becomes available, and I don't think that threshold is at 4x the power, but more like 10x~15x.
 
The 6x figure doesn't add up with the HD6670 card's specifications. The thing would need to be clocked pretty high to match that statement, 1.2GHz+.
Still, sadly, factoring in an SoC, whatever an HD Kinect may cost, memory, etc., and a sucky economy, the thing seems believable to me. Alstrong has already made the point on that matter multiple times (precisely on HD5670 level of perf). Hey, 32nm grants us a bonus, though not the one we were expecting, a fairly tiny one :LOL:
Depends on what you mean. Every single aspect over 6x Xenos? Sure, it's not, but shader power, for example, is way more than 6x that of Xenos.
 
What's the smallest a 384-bit controller could shrink to? 250mm²?

Probably closer to 300mm².

Alstrong has already made the point on that matter multiple times (precisely on HD5670 level of perf).
That was geared towards the +50% 360 rumours for the WiiU (regarding ALUs and TMUs and ROPs).

By the time the next Xbox launches though, 28nm ought to be the target process tech and a 6670 will indeed be rather tiny (as if it isn't already!).

I hope that in regard to RBEs the thing would be closer to an HD57xx or half an HD6850 than to an HD6670 ==> 16 RBE.
Just a small correction: RBE = 4 ROPs, so 4 RBEs is what you mean.

-------

IGN really needs to stop spreading the FUD and semiaccurate news. (pun intended :rolleyes: )
 
That was geared towards the +50% 360 rumours for the WiiU (regarding ALUs and TMUs and ROPs).

By the time the next Xbox launches though, 28nm ought to be the target process tech and a 6670 will indeed be rather tiny (as if it isn't already!).
Actually, I was referring to a statement (I believe) you made on the matter well before the WiiU announcement, back when we were assuming 45/40nm lithography as the process and the odds favored a low-power system. My memory may not serve me right, though.

AMD is still using 40nm lithography for the HD6670, and that's why the IGN rumor is incoherent. They speak of a GPU, not an SoC, having 6 times the power of Xenos and at the same time being akin to the HD6670. If MS were to use two chips for their next system, producing an HD6670 doesn't make much sense, as this level of perf can be provided by an integrated GPU. It's indeed tiny, and that's why I use Llano as a ref (an SoC using 32nm lithography): it's a ~230mm² chip @32nm that burns 100 Watts with 4 CPU cores and a lesser GPU (5 vs 6 SIMD arrays, running 200MHz slower).

It's still believable to me. We know nothing about MS's positioning for their next product, but I can see MS choosing Kinect HD as standard, as well as an HDD. It all depends on the price bracket MS wants to hit at launch. The current systems have been out for 7 years and counting; usually most gamers don't jump in till prices get lower. MS may want to reach a bigger market from scratch, which hardcore gamers and technophiles here refuse to understand. It's about consolidating their position and avoiding the Nintendo pitfall: Nintendo has so far failed to consolidate its leading position.

EDIT
Taking into account the marketing department, MS may want to secure north of 1 TFLOPS (CPU+GPU) of the so-fashionable 'compute power'.
That's achievable with an ~HD6670 and a quad core with 256-bit-wide SIMD.
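Rough math on that claim (the CPU clock and flops/cycle are my assumptions for illustration):

```python
gpu_gflops = 480 * 2 * 0.8   # HD6670: 480 SPs x 2 flops x 0.8GHz = 768

cores, lanes, flops_per_lane, ghz = 4, 8, 2, 3.2   # 256-bit SIMD, mul+add, assumed clock
cpu_gflops = cores * lanes * flops_per_lane * ghz  # ~205

print(f"GPU {gpu_gflops:.0f} + CPU {cpu_gflops:.0f} = {gpu_gflops + cpu_gflops:.0f} GFLOPS")
```

That's ~973 GFLOPS, close enough that slightly higher clocks get you north of 1 TFLOPS.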
 
IGN really needs to stop spreading the FUD and semiaccurate news. (pun intended :rolleyes: )

Yeah, really. They have done zero next-gen reporting worth a flip. Their tech editor Scott Lowe (who wrote this 6670 piece) also knows very little about tech (I've caught plenty of his errors in past articles).

They've broken zero original next-gen news that I know of, zero. Anything they've written on Wii U so far is just stolen from/based on the original 01.net site. At least Casamassina had sources and actually broke the Wii's tech info.

Yeah, I'm not a blind IGN basher, but they've been horrible in the next-gen reporting department. They'll randomly write up almost any nonsensical rumor from the web from time to time and give it unfounded credibility.
 