Predict: The Next Generation Console Tech

I hope the game does get ported to PS4 at some point, along with the previous entries too. And yeah, KZ4 should look nuts; I can imagine BF3-style graphics on steroids.

That infinity icon in the top left also looks eerily suspicious, don't you think, with all the Xbox 8 rumors going around?
I can see things getting interesting in the coming week. :)

I think that's just the normal Visual Studio 11 icon.

If he's got the dev kit, why not just leak all the target specs?
 
Btw, I also predict an Nvidia GPU showing up as well, now that the AMD APU is off the table.
The PS4 will probably also use XDR2, as it's kind of tied to the new Cell processor.

Clearly if you're going to use one part not in development, you might as well use two.
 

Well, you know what they say, if you are going to be wrong, you might as well be really really wrong. If you are going to jump off a cliff, you might as well take an anvil with you. Etc.
 
Well, that's just based on the rumors/hinting that Durango may use something in the Cape Verde range. That's the 7770 with 640 SPs or the 7750 with 512. So a 768 SP Mars would be a nice upgrade without breaking the silicon bank.

It's low end for desktop, but talking about 1.7 billion transistors (Mars; 1.5B for Cape Verde) vs. 232M (plus the eDRAM scraps) in Xenos puts it in perspective as still a ginormous complexity leap.

Also, I believe the Xenos parent die was ~180mm^2. Mars is 135mm^2 according to that. Both would feature eDRAM. That is to say, it's not a monumental difference. Going to the next step up, Oland, would require a significant increase over the Xenos budget.

First: how do you know they have eDRAM? Do you actually know stuff, or are you frequently tossing out predictions as facts? Just curious, because you cut off the discussion with these "facts," and if they are facts, great, as there is no point chasing pointless rabbit trails. But if not...

Second: 205mm^2 (Xenos logic) versus 135mm^2 is a LOT. 52% more silicon real estate, to be exact, and based on the area "sunk" into basic architecture (see above on these supposed scalings; you get more performance out of additional space) that is going to be substantially faster (looking at the 40-60% range).

I guess the issue is that you have chosen Cape Verde as a baseline, and then anything above that is impressive, tasty, etc. ;) To review:

G71 @ 90nm: 196mm^2
RSX @ 90nm: 240-258mm^2 (depending on source)
Xenos @ 90nm: 180mm^2 (Mintmaster)
eDRAM @ 90nm: 70mm^2 (~105m transistors, of which about 80m eDRAM and 25m ROP logic -- so 205mm^2 logic)

Cape Verde @ 28nm: 123mm^2
Pitcairn @ 28nm: 212mm^2
Oland @ 28nm: 235mm^2
Mars @ 28nm: 135mm^2

Some budget comparisons:

RSX vs. Cape Verde: -53%
RSX vs. Pitcairn: -18%
RSX vs. Oland: -9%
RSX vs. Mars: -48%

Xenos vs. Cape Verde: -40%
Xenos vs. Pitcairn: +3%
Xenos vs. Oland: +14%
Xenos vs. Mars: -35%
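
If anyone wants to check those percentages, here's a quick Python sketch using the areas listed above (I'm taking the 258mm^2 figure for RSX and the 205mm^2 logic figure for Xenos; rounding and which RSX number you pick shifts things by a point or two):

# Rough die-area deltas: 90nm last-gen parts vs. 28nm candidates.
# Areas are the ones quoted above; RSX varies by source (240-258mm^2).
last_gen = {"RSX": 258, "Xenos (logic)": 205}
candidates = {"Cape Verde": 123, "Pitcairn": 212, "Oland": 235, "Mars": 135}

for old_name, old_area in last_gen.items():
    for new_name, new_area in candidates.items():
        delta = (new_area - old_area) / old_area * 100
        print(f"{old_name} vs. {new_name}: {delta:+.0f}%")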

My premise has continued to be a comparison to the relative real estate of last generation. Given that as a general ballpark baseline (for all the reasons I have cited in this thread), the GPUs you mention (Cape Verde, this supposed Mars) are major, substantial drop-offs from last generation in terms of space. They also fall far short of desktop discrete GPUs from 4-5 years *before* the launch of the new consoles.

Sure, if this were 2010 maybe Cape Verde would be a nice upgrade, but we are talking about 2013+. And there is no other way to put this: putting in a GPU that is 40-50% smaller than RSX and Xenos means making a HUGE cut in silicon real estate. And if we are to buy the GPU industry's big GPGPU push, with the GPU supposedly taking over more of the tasks something like a Cell processor does, then the GPU will be asked to do more with less total space.

There is nothing exciting about this. Heck, we will be seeing AMD/Intel SoCs in the next couple years with better than console performance.

Also, I don't see 28nm as an excuse. Node transitions aren't news, and there is always someone whining (in this case NV). 28nm, which had select parts rolling out in late 2011, is going to be very mature by late 2013. Layout, yields, power, and pricing will all be at a more mature point than 90nm was in 2005.

If MS/Sony are going with chips that are nearly 50% smaller, it has a lot to do with their visions for their platforms (read: general entertainment devices that give equal weight to core gaming, casual gaming, new peripheral gaming, media streaming, services, digital distribution, etc.) instead of a core gaming experience first (which has tens of millions of customers out there) with those other features coming along as a robust package. I have no problem with that, but I also don't think it should be sugar coated when the hardware is low end like Cape Verde, or papered over with "Kinect 2 will be so much more accurate and default hardware, so it will work for core games" while ignoring the basics: it lacks proper input for core genres (FPS, driving, sports), and regardless of precision Kinect is a total non-starter in an FPS or the like because you cannot move. The only solution is rails, which is a huge step back. That may appeal to casuals but, again, it is a major concession demanded of core gamers.

TL;DR: Cape Verde/Mars are a huge reduction in hardware from last gen. HUGE.
 
I don't think die size is a particularly interesting way of looking at these designs. It really comes down to how many transistors they can power under something like 200 Watts. Sure, you can fit way more transistors into 200mm^2 now, but could you power and cool that? PCs are an oddball because enthusiasts seem to have little concern for power supply requirements that force extreme cooling, like multitudes of fans or liquid cooling in a large case. Consoles are different. They need to be smallish, quiet, and consume relatively little power. Thermal performance/power will always be the starting point of the design. You really can't compare to a PC.
 

Given Acert93's lists (thanks for those btw), what kind of wattage would we be looking at for the same real estate as last gen? What kind of physical size and cooling requirements would that have? I really think total wattage is not as big a concern unless you are talking about heat loads that require boxes that are too large and/or too expensive to cool. Consumers don't really seem to care. Anecdotal case in point: my stepfather, buying a new TV, started out enamored of LED TVs. One look at the actual price difference vs. the savings and he dumped that real fast. No one really seems to look at something like that. Physical size doesn't seem to matter a whole lot. My 360 (not a slim) is actually smaller than our cable box/DVR (Comcast). It's far smaller than the 5.1 receiver. Is the same square mm going to cost that much more to power and cool this coming gen?
 
I don't think die size is a particularly interesting way of looking at these designs. It really comes down to how many transistors they can power under something like 200 Watts.
Given the current gen of AMD video cards, it pretty much doesn't matter if you base your speculation on die size or power, though.

Be it ~200mm² or <150W TDP (i.e. ~100W typical power draw) - both of those premises point towards a GPU in the broad range of HD7850, i.e. a slightly castrated and underclocked Pitcairn. Curiously enough, that's what most of the more credible rumors suggest, too.
 

Most rumors are pointing towards a 1.1-1.8TF chip for both consoles which, in my opinion, is too low. Pitcairn XT is significantly faster than either at 2.5TF and is roughly the same size as Xenos @ 90nm.

Pitcairn is far more efficient per transistor than Xenos was and costs roughly the same to manufacture. Why wouldn't they choose Pitcairn XT on a very mature 28nm process, or a slightly more powerful chip (Oland), in time for Fall 2013, assuming the arbitrary 200W barrier?

My personal take on the matter is that we are on the cusp of a computing revolution, with chip stacking, 3D transistors, quantum computing, mobile computing, and graphene transistors instead of silicon, all happening during a time when the consoles are supposed to last until 2020. I think console makers should bring it all, full stop.
 
Scott_Arm said:
I don't think die size is a particularly interesting way of looking at these designs. It really comes down to how many transistors they can power under something like 200 Watts. Sure, you can fit way more transistors into 200mm^2 now, but could you power and cool that? PCs are an oddball because enthusiasts seem to have little concern for power supply requirements that force extreme cooling, like multitudes of fans or liquid cooling in a large case. Consoles are different. They need to be smallish, quiet, and consume relatively little power. Thermal performance/power will always be the starting point of the design. You really can't compare to a PC.

A couple dozen pages back I had various numbers showing the scaling of various GPUs with frequency. Architecture and frequency play big roles. The obvious one is that moving up 10% in clocks may boost TDP by 25%. But a high-clocked 130mm^2 GPU may dissipate as much heat as a more reasonably clocked 210mm^2 one with the same architecture, even though it would seem that having so many more transistors would cause the reverse. So a smaller die or fewer transistors isn't a guarantee of lower power.

Interestingly enough, undervolted Pitcairns with disabled units (7850s), with their power-hungry GDDR5, were 105W iirc, and Pitcairn proper as low as 130W. If we are picking 200W as an absolute "wall" on TDP (not sure that is fair, but that uses last gen as a baseline), 105W for a 212mm^2 GPU & PCB and a boatload of GDDR5 working over a PCIe bus leaves a lot of room for a CPU (which should not be above 50W anyway), wireless chips, HDDs, optical, etc. In fact such a setup would be right at or below the high end of this generation. I would hope that 8 years of cooling design advances (read: not putting in the absolutely cheapest cooling solution) offer a lot of options for moving forward. Especially if a HiFi format is chosen, as some rumors indicate...
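
To put rough numbers on the clock/TDP point, here's a back-of-the-envelope Python sketch of the usual dynamic-power rule of thumb (power scales roughly with voltage squared times frequency). The 7% voltage bump and the wider-chip figures are purely illustrative assumptions, not data for any specific part:

# Back-of-the-envelope dynamic power scaling: P ~ C * V^2 * f.
def relative_power(clock_scale, voltage_scale):
    return clock_scale * voltage_scale ** 2

# Assume (illustrative only) that +10% clocks needs roughly +7% voltage.
pushed_harder = relative_power(1.10, 1.07)
print(f"+10% clock, +7% voltage: ~{100 * (pushed_harder - 1):.0f}% more dynamic power")

# A wider chip clocked and volted lower can land at similar power:
# ~60% more units, -20% clock, -10% voltage (again, illustrative numbers).
wider_and_slower = 1.6 * relative_power(0.80, 0.90)
print(f"Wider, slower chip: ~{wider_and_slower:.2f}x the power of the small chip")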
 
Why wouldn't they choose Pitcairn XT on a very mature 28nm process, or a slightly more powerful chip (Oland), in time for Fall 2013, assuming the arbitrary 200W barrier?
Any more than ~200W of actual power draw for the entire system will be very hard to dissipate within reasonable noise levels - even with fancy (i.e. expensive) cooling solutions. A launch Xbox 360 typically pulled ~175W from the wall, a launch PS3 ~190W. And those machines didn't come with lightweight cooling solutions. Factor in that no more than ~80% of the wall draw will make it past the PSU due to efficiency constraints, and you end up with some ~150-160W of actual power budget to run the entire system.

Consequently, any customized equivalent of a retail video card that needs more than one PCIe power connector is completely out of the question for home consoles, imo. Half of the overall power budget, i.e. around 80W of actual power draw for the GPU part, should be a rather good guess - give or take a few watts depending on the RAM configuration and possible emphasis on either CPU or GPU.

Pitcairn XT would certainly be stretching that budget A LOT - even with the benefit of unified RAM.
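
Spelling that budget out as arithmetic (a minimal Python sketch using the numbers above; the ~50% GPU share is the guess stated here, not anything confirmed):

# Rough system power budget, working back from wall draw.
wall_draw_w = 200          # roughly what the launch 360/PS3 pulled from the socket
psu_efficiency = 0.80      # assume no more than ~80% makes it past the PSU
dc_budget_w = wall_draw_w * psu_efficiency
print(f"Usable power budget for the whole system: ~{dc_budget_w:.0f}W")

gpu_share = 0.50           # assumption from above: about half the budget for the GPU
print(f"GPU share at ~50%: ~{dc_budget_w * gpu_share:.0f}W")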
 
I notice quite a lot of "mm2" and transistor numbers being thrown around. Why is that?
"More is better"?

It is a popular belief that the RSX pales in comparison to Xenos.
But the transistor counts and the "mm2"s are in favor of the RSX.

On topic:
I am really curious as to what Nvidia has up its sleeve.
I know they are developing a high-end, low-powered GPU, but that can't be exclusive to a Steam box, right?

Nvidia will also try to make deals. If they are not in the MS, Sony, or Nintendo box, I am sure their stock will drop.
Plus, the declining PC market would not help either.

So yeah, it is in their best interest to license their new technology to Sony.

That is why the PS4 will probably have an Nvidia GPU as well, IMO.
 
A couple dozen pages back I had various numbers showing the scaling of various GPUs with frequency. Architecture and frequency play big roles. The obvious one is that moving up 10% in clocks may boost TDP by 25%. But a high-clocked 130mm^2 GPU may dissipate as much heat as a more reasonably clocked 210mm^2 one with the same architecture, even though it would seem that having so many more transistors would cause the reverse. So a smaller die or fewer transistors isn't a guarantee of lower power.

Interestingly enough, undervolted Pitcairns with disabled units (7850s), with their power-hungry GDDR5, were 105W iirc, and Pitcairn proper as low as 130W. If we are picking 200W as an absolute "wall" on TDP (not sure that is fair, but that uses last gen as a baseline), 105W for a 212mm^2 GPU & PCB and a boatload of GDDR5 working over a PCIe bus leaves a lot of room for a CPU (which should not be above 50W anyway), wireless chips, HDDs, optical, etc. In fact such a setup would be right at or below the high end of this generation. I would hope that 8 years of cooling design advances (read: not putting in the absolutely cheapest cooling solution) offer a lot of options for moving forward. Especially if a HiFi format is chosen, as some rumors indicate...

Yeah, if you adjust the voltage of your processor, it's going to change the power it draws and the heat it dissipates. I still don't think that makes die size a reasonable starting point for silicon budgets. I think they're more likely to target $ and TDP and see what kind of processing power they can get within that budget. It may end up being a large die, or it may end up being more transistors in a smaller die instead.
 

I agree that TDP at this point is the barrier most likely to be hit first before die size or price.

My assumption, based on a number of posts many pages ago, was that the AMD GCN and the Nvidia Kepler architectures can produce a workable TDP on the budgets last generation had.

e.g. a 7950 (Tahiti with disabled units) has almost the same max TDP as the 7870 (full Pitcairn), but in most cases the 7950 is a bit faster. The raw specs are in its favor (+12% shader, +32% texel, +56% bandwidth, +50% bus width, +50% framebuffer, etc.), with only fill rate being lower. The big difference is that the 7950 is clocked at 800MHz instead of 1000MHz. So the larger die (more functional units) clocked lower turns out more performance/W. And that is with the extra power required for more, faster memory on a larger bus taken into account (my understanding is that memory controllers can soak up a lot of power driving the traces).
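
For reference, a quick Python sketch of where the ALU and fill-rate deltas come from, using the public shader counts and clocks (GFLOPS = 2 ops x shaders x clock; board TDP figures vary a bit by source, so I'm leaving those out):

# HD 7950 (cut-down Tahiti @ 800MHz) vs. HD 7870 (full Pitcairn @ 1000MHz).
# Shader counts, clocks, and bandwidth are the public launch specs.
cards = {
    "HD 7950": {"shaders": 1792, "rops": 32, "clock_ghz": 0.80, "bw_gbs": 240.0},
    "HD 7870": {"shaders": 1280, "rops": 32, "clock_ghz": 1.00, "bw_gbs": 153.6},
}
for name, c in cards.items():
    gflops = 2 * c["shaders"] * c["clock_ghz"]   # 2 FLOPs per shader per clock
    fill = c["rops"] * c["clock_ghz"]            # Gpixels/s
    print(f"{name}: {gflops:.0f} GFLOPS, {fill:.1f} Gpix/s, {c['bw_gbs']} GB/s")
# Per the comparison above: the larger die at 800MHz comes out ~12% ahead in ALU
# and ~56% ahead in bandwidth while giving up some fill rate.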

Seeing people get Pitcairn under 100W gives me pause, because that isn't too far from the 80W Cape Verde. And let's be frank, the people holding last-gen TDP up as the strictest limit would also have to concede that Cape Verde is "too much."

Personally, I fully expect a custom-designed console chip. This means AMD (probably not NV) will sit down with MS and make educated projections. They will itemize what is most important and then look at the power and performance of x number of ROPs, x number of TMUs, x number of shaders, x number of raster engines, etc., and weigh the cost/benefit of each unit against the baseline needs for 720p. Based on what we are seeing from chips nowadays, I think a bigger chip clocked lower is an easier path than a smaller, higher-clocked chip (I think that much is obvious).

I also think this is why people should not be poo-pooing DDR4. Besides the long-term cost benefit (more expensive than DDR3 at launch but much cheaper over the lifetime; about a quarter of the cost of GDDR5), DDR4 also offers a nice downward bump in voltage over DDR3. This may also be why some are looking at it as a complete replacement for GDDR5 (cost and, of course, power). A move away from GDDR5 opens up more of the power budget elsewhere (bang for buck). This is also why I think some stacked memory or eDRAM is likely, to provide a "cheaper" (in terms of TDP) high-bandwidth reservoir for high-bandwidth clients.

For these and other reasons I don't see an off-the-shelf part, and I don't see a smaller chip clocked higher like Cape Verde. Of course, whatever the chip is may end up very much like Cape Verde in performance (larger but clocked lower, with a similar end result), but if that is the case, with Cape Verde having 70GB/s of bandwidth on the high end, there may be no need at all for GDDR5, eDRAM, or stacked memory. Cape Verde screams "DDR4 on a wide bus," with the justification that most games will target 720p. I fail to see the merit of the argument for going with a Cape Verde class GPU and then trying to get it 100GB/s of bandwidth, unless they are doing a Xenos-style approach, which, frankly, given the advances in post-processing and the issues eDRAM presents, would make a "dumb" full-rate eDRAM in 2013 seem like a massive misappropriation of silicon and budget IMO. A number of developers have been open about stating they would have preferred the eDRAM budget be dedicated to the GPU proper, and I think that would be the major conclusion if MS presented a Cape Verde + Xenos eDRAM setup.
 
In the PS4 leaked spec targets, they said that the final specs would be 10X the PS3.

The Radeon HD 7970 is actually about 10X the RSX in GFLOPS, & that APU with the Tahiti GPU is 1.84 TFLOPS, which is about 10X the Cell in GFLOPS.

So could there be a small chance that the part about the 7970 wasn't just lost in translation?

I know it looks way over the top, but if the part about the final specs being 10X the PS3 is true, the CPU would have to be about 2 TFLOPS & the GPU would be about 4 TFLOPS, & it would probably burn the house down lol.
 

Back when the target specs came out, Tahiti was the only publicly known name tied to the 7000 series, so it was most likely used to indicate what architecture the PS4's GPU would resemble, not its raw processing power, since they made sure to specifically mention 18 CUs.
 

Yes, I got that. I'm just playing with the fact that they said the final specs would be 10X the PS3.


Cell is 204 GFLOPS in the PS3 using the 6 SPEs, & RSX is 400 GFLOPS (well, that's what Wikipedia said).

If the "final specs are 10X the PS3" part is to be believed, then the CPU would have to be around 2 TFLOPS, which is about the same as that APU with the 1.8 TFLOPS GPU.

& 10X the RSX's 400 GFLOPS is 4 TFLOPS, which is about the same as the Radeon HD 7970's 3.79 TFLOPS.


I know all this is crazy, but that's what 10X the PS3 means in my book. Maybe the person who wrote "10X PS3" has a different way of calculating it?
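
Just to lay the arithmetic out, a minimal Python sketch using the (Wikipedia-sourced, possibly inflated) figures quoted above:

# "10X PS3" taken literally, per processor, using the figures quoted above.
cell_gflops = 204   # PS3 Cell with 6 usable SPEs (Wikipedia figure)
rsx_gflops = 400    # RSX figure quoted above (questionable, see the edit below)

print(f"10X Cell: ~{10 * cell_gflops / 1000:.1f} TFLOPS (vs. the ~1.84 TFLOPS APU mentioned above)")
print(f"10X RSX:  ~{10 * rsx_gflops / 1000:.1f} TFLOPS (vs. ~3.79 TFLOPS for a Radeon HD 7970)")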

Edit: does anyone have the real RSX specs? 400 GFLOPS seems to be fake.
 