Predict: The Next Generation Console Tech

But at this stage we can reasonably assume they are not powerful enough to run UE4 outright the way a Kepler card can. Don't get me wrong, I hope the consoles are as powerful as they can be, but my faith in them is not 100%.

I don't think we can assume anything regarding next gen consoles and UE4, we just don't know enough.

Seeing all the issues Epic ran into with UE3 early this gen, it's equally possible they want to make sure everything is done and ready with UE4 before releasing it into the wild.

IMO we should wait for more info before assuming anything.

Charlie just released a picture of four IBM Power7+ chips connected over one interposer. :) This would be perfect for the X720.

http://semiaccurate.com/2012/03/21/ibm-power-7-spotted-and-it-is-a-monster/

As has already been said, the chip looks huge, but I'd still like to see it in a console. :p
 
Ok, the "solution" to the CPU problem:

20W for a 131.1mm^2 RISC CPU with 1024 cores and a sustained 1.4 TFLOPs.

;)
Another magical processor. If it was as easy as just whacking zillions of tiny cores in a chip, we'd be there already in mainstream architectures. 512KB L2 cache per core would be 512 MB for 1024 cores. That'd require about 2000 mm^2 at 32nm, right? ;)
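A quick back-of-the-envelope check of that cache math, for anyone curious (the ~0.25 MB/mm^2 SRAM density at 32nm is my own rough assumption, not a quoted figure):

```python
# Back-of-the-envelope check of the cache-area claim above.
CORES = 1024
L2_PER_CORE_MB = 512 / 1024       # 512 KB per core, in MB
SRAM_MB_PER_MM2 = 0.25            # assumed 32nm SRAM density (rough guess)

total_cache_mb = CORES * L2_PER_CORE_MB             # 512 MB
cache_area_mm2 = total_cache_mb / SRAM_MB_PER_MM2   # ~2048 mm^2

print(f"Total L2: {total_cache_mb:.0f} MB -> ~{cache_area_mm2:.0f} mm^2 at 32nm")
```

Which lands right around the ~2000 mm^2 figure, i.e. several full reticles of nothing but cache.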
 
Oh come on guys, just bolt them on right next to the 8 SPEs we are gonna bolt on, too! It will be a smorgasbord of CPU technology. With how small these things are we could even toss in a couple of GPU arrays, too! ;)
 
Does anyone know if they can test the chips before putting them on the interposer?
If they can't, would the yield be essentially the same as if it were a single huge chip?

There are some preliminary checks for defects, but the bulk of the testing has to wait until the silicon is connected.
The yield situation is more complex to calculate.

There is the individual yield of each chip, the yield of the interposer, and the probability of an error or fault in the attachment of each piece.
Then there is the question of how many bins are available that less-than-perfect modules can be sold into.

The module yield is necessarily worse than that of any single die on its own.
The probability of a completely functional module is something like (yield of one CPU)^4 * (yield of interposer) * (attach success rate)^4.
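A minimal sketch of that calculation, with made-up numbers purely for illustration (none of these are IBM figures):

```python
# Module yield for a 4-die-on-interposer package, per the formula above.
# All numbers are illustrative assumptions, not IBM figures.

def module_yield(die_yield, interposer_yield, attach_success, num_dies=4):
    """P(fully working module) = die_yield^n * interposer_yield * attach_success^n."""
    return (die_yield ** num_dies) * interposer_yield * (attach_success ** num_dies)

# Even with fairly generous per-step numbers, the product drops fast:
y = module_yield(die_yield=0.80, interposer_yield=0.95, attach_success=0.99)
print(f"Module yield: {y:.1%}")   # ~37.4%, versus 80% for one die alone
```

Which is why the binning question matters: partially working modules recover some of that loss.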

There are a number of things IBM has done, such as the power consumption, connectivity, and an apparently much more complex interposer, that worsen various values in that equation.
If it weren't for the mountains of cash IBM makes from its high-end systems and software, it might care more about this.

It's still not as bad as a chip with 4x the area, first because such a chip is physically not possible with the fabrication methods being used, and second because the yield of a single chip gets very bad very fast at those sizes.
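To illustrate the "very bad very fast" part with the classic Poisson yield model (defect density and die size below are assumptions for illustration, not Power7+ figures):

```python
import math

# Poisson yield model: Y = exp(-D * A). Yield decays exponentially with
# area, so a 4x-area die is exposed to 4x the defects.
# D and A are illustrative assumptions, not Power7+ figures.
D = 0.4                       # defects per cm^2 (assumed)
A = 2.0                       # single die area in cm^2, i.e. 200 mm^2 (assumed)

y_one_die = math.exp(-D * A)       # ~44.9%
y_4x_die  = math.exp(-D * A * 4)   # ~4.1% -- "very bad very fast"

print(f"200 mm^2 die: {y_one_die:.1%}, 800 mm^2 die: {y_4x_die:.1%}")
# The 4-die module pays interposer and attach losses instead, and any
# pre-assembly screening can discard bad dies, which a monolithic die can't.
```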
 
[Image: capturevxjwv.png]


Source says "straight from Nvidia"
 
[Image: capturevxjwv.png]


Source says "straight from Nvidia"

To quote myself from the Kepler thread:
Though, as marketing slides often are, it's misleading: the GDC 2012 demo, going by the released screenshots, ran at 1280x720 instead of 1920x1080, and used FXAA instead of MSAA.
And one shouldn't forget how the Epic guy commented in 2011 that they could probably optimize it to run on a single GTX 580 instead of three.
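For scale, the resolution difference alone is a 2.25x pixel-count gap, before the AA change is even counted (quick arithmetic check):

```python
# Pixel-count gap between the demo resolution and a 1080p target.
pixels_720p  = 1280 * 720     # 921,600
pixels_1080p = 1920 * 1080    # 2,073,600
print(f"1080p / 720p = {pixels_1080p / pixels_720p:.2f}x")   # 2.25x more pixels

# MSAA also stores and resolves multiple samples per pixel, while FXAA is a
# cheap post-process pass, so the slide's comparison is not apples to apples.
```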
 
Without any reference to any changes (e.g. from MSAA to FXAA), not to mention the optimizations made during the last year of development, that slide doesn't mean much.
 
No cache coherency. Just no.

Due to my profession, I've seen a number of massively parallel processors with huge ALU capabilities and very marginal support in terms of other subsystems pop up and disappear. None ever got much support even in scientific circles. These days it is such an easy thing to do, and still such a godawful nightmare to actually produce anything useful with, much less achieve any kind of general applicability.
For the life of me, I can't see how these design exercises manage to get funding. Even the academic interest should be minuscule by now.
 
Without any reference to any changes (e.g. from MSAA to FXAA), not to mention the optimizations made during the last year of development, that slide doesn't mean much.

It proves that humans are still the best supply of BTUs for the Matrix MULs and lulz. ಠ_ಠ
 
20W for a 131.1mm^2 RISC CPU with 1024 cores and a sustained 1.4 TFLOPs. ;)
It's starting to look pretty close to GPGPU. My wild guess is that there are probably very few cases, for console gaming applications, where a modern GPU architecture wouldn't be equal to or better than a CPU with 1024 tiny cores.
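A quick sanity check on what those headline numbers imply per core (the 1-FMA-per-cycle assumption is mine, derived backwards from the FLOPs figure):

```python
# What 1.4 TFLOPs spread over 1024 cores implies per core.
TFLOPS = 1.4
CORES = 1024

flops_per_core = TFLOPS * 1e12 / CORES          # ~1.37 GFLOPs per core
implied_clock_ghz = flops_per_core / 2 / 1e9    # assuming 1 FMA (2 flops)/cycle

print(f"{flops_per_core / 1e9:.2f} GFLOPs per core")                  # 1.37
print(f"Implied clock at 1 FMA/cycle: {implied_clock_ghz:.2f} GHz")   # ~0.68
# Lots of simple ALUs at a modest clock -- which is exactly the GPGPU recipe,
# minus the memory system and scheduling hardware a real GPU brings along.
```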
 
I don't think we can assume anything regarding next gen consoles and UE4, we just don't know enough.

Seeing all the issues Epic ran into with UE3 early this gen, it's equally possible they want to make sure everything is done and ready with UE4 before releasing it into the wild.

IMO we should wait for more info before assuming anything.

Exactly. The first UE3 images date back to May 2004 and were probably running off dual 6800s. Gears of War, the first UE3 title, launched in November 2006. Following the same timeline, had Epic released info on UE4 at GDC, it'd be late 2014 when the first game to utilize it hits the market. If next gen consoles arrive late 2013/early 2014, you're looking at UE3 for many launch window titles.
 
As ATI's first major compute architecture, and considering the memory controller differences (size), I wouldn't say it is shocking. It does show ATI has a bit of ground to make up to reach parity. On the other hand, my guess is that Tahiti/7970 has *major* headroom for a revision with increased clocks. But that does not address the TDP differences, where Nvidia, surprisingly, has really pulled ahead in delivering a GPU with a lower TDP and better performance. Part of this is undoubtedly related to the memory bus differences (powering a 384-bit bus doesn't come free, and if I have read the quips correctly, Kepler/680 is 256-bit), so I don't think this spells doom for AMD's architecture in general, but it does show that there is room for improvement.

Edit: and not to be cynical, but in the PC space with the NV dev rel money hats I would not draw super hard conclusions about how a chip would perform in the console space. By all means it is very relevant in the PC space--NV has done a good job of getting their technology exploited and performing well in the PC space--but I am not sure it speaks conclusively about how good/bad GCN is as a DX11.1 architecture. That being said Kepler looks pretty amazing. I would be curious to see how much the power draw reduces with a frequency reduction. At ~170W it is too high for a console unless budgets change. That being said... wow... Kepler lays down the wood in Skyrim. OUCH.

... hmmm Ranger indicates it draws more power than a Tahiti per some reviews...
 
Guru3D has GTX 680 benches and it's a clean sweep to victory over the 7970, bar Metro 2033. More interestingly, the TDP is a shockingly low 173W, even lower than the Pitcairn 7870, holy sh!t. And you know what that means, ladies and gents.
Here's the Gaf link with all the graphs.
http://www.neogaf.com/forum/showthread.php?t=459499&page=30

This is quite a bit offtopic for the thread, and the TDP isn't 173W, it's 195W; the card in fact draws more power than the 7970 in some games (at least Crysis 2 & Deus Ex by the looks of it).
 
Well, after reading Hardware.fr and Anandtech I would say that AMD has indeed the best architecture, and Pitcairn is still the most impressive card out there.
There are a lot of unbalanced reviews out there, especially once OC is taken into account, and at high resolution (2560) the cards (7970 and 680) are a wash.
If anything, Kepler shows that depending on what a manufacturer plans to do with the GPU (if it's mostly graphics), cutting the "fat" is clearly the way to go; with regard to AMD's offerings, GCN may not be the best choice.

Contrary to most people here, I might not want a DirectX-something-compliant GPU for next gen. I don't give a shit about a lot of the stuff introduced along the road, like precision for IEEE compliance, etc. I believe there is a lot of fat to be removed in a custom design intended for games only. It should be possible to beat the 7750 and 7870 in perf per watt and per mm^2.
 