Predict: The Next Generation Console Tech

Did the PC version of the first game even offer much beyond a higher rendering resolution? I wouldn't expect them to go the extra distance for Wii U if they don't do the same for PC.

At the end of the day, it's just another platform on the list to support, and to support as inexpensively as possible.
 
http://www.gamereactor.se/nyheter/43021/GRTV:+Darksiders+II-intervju/

Vigil says Wii U hardware is "on par" with current-gen. No enhancements to the port... are they artificially limiting it to 30fps too? This is just sad for a new console.

I would not be the slightest bit surprised if the Wii U games end up being full 720p, 30 fps versions of current-gen games (many of which are sub-HD) with very subtle, difficult-to-notice enhancements to some graphical effects.

That is logical considering the law of diminishing returns: the more the technology advances, the more difficult it becomes to improve it further and impress gamers. That's why I prefer to wait until fall 2014 for the release of the next-gen consoles.
 
There are far, far too many devs saying it's faster than current gen to take Vigil even remotely seriously.

You also have to factor in the developer's position. Is he an engineer who has had plenty of hands-on time with the engine, or is he more of a director or producer?
 
Discuss: Relevance of GPU compute for next gen consoles.

I note this because it seems one of the ways NV was able to cut power and die space while pumping up performance/mm^2 was to drastically reduce full-rate DP performance to 1/24th of SP speed. This (and other?) changes show up in a number of compute benchmarks, where the GK104, which otherwise runs all over the competition, falls pretty far behind in some situations. Compute is being used in games right now (e.g. some AO approaches), and a lot of what the Cell does on the PS3 to pull off effects not viable on RSX is compute-like work and significant post-processing. Likewise, AMD's Leo demo is a forward renderer that uses compute to cull lighting, drastically improving performance with many lights; that keeps the more in-depth shader complexity of a forward renderer while scaling the light count to be more competitive with a deferred approach. Given the trouble developers have getting MSAA to work properly with deferred engines (especially when they miss stuff, like forgetting to AA their tone maps) and the increased performance cost of MSAA on a deferred engine, it does seem that compute, in some situations, is an important part of new graphics hardware.
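For concreteness, here is a rough CPU-side sketch (my own illustrative Python, not anything from the Leo demo itself) of the kind of per-tile light binning such a compute pass performs. A real implementation would run this in a compute shader with a proper per-tile frustum test; the screen-space bounding-box check, the 16x16 tile size and the light data layout here are all my own simplifying assumptions:

```python
# Illustrative sketch of tile-based light culling (the compute pass a
# Forward+-style renderer runs before shading). Each 16x16 pixel tile
# gets a list of only the lights that can affect it, so the forward
# shading pass loops over a short per-tile list instead of every light.

TILE = 16  # tile size in pixels (an assumption for this sketch)

def cull_lights(screen_w, screen_h, lights):
    """lights: list of dicts with a projected screen position (x, y) and
    a projected radius, all in pixels. Returns a dict mapping
    (tile_x, tile_y) -> list of light indices affecting that tile."""
    tiles_x = (screen_w + TILE - 1) // TILE
    tiles_y = (screen_h + TILE - 1) // TILE
    tile_lights = {(tx, ty): [] for tx in range(tiles_x) for ty in range(tiles_y)}

    for i, light in enumerate(lights):
        # Conservative screen-space bounding box of the light's influence.
        min_tx = max(0, int((light["x"] - light["radius"]) // TILE))
        max_tx = min(tiles_x - 1, int((light["x"] + light["radius"]) // TILE))
        min_ty = max(0, int((light["y"] - light["radius"]) // TILE))
        max_ty = min(tiles_y - 1, int((light["y"] + light["radius"]) // TILE))
        for ty in range(min_ty, max_ty + 1):
            for tx in range(min_tx, max_tx + 1):
                tile_lights[(tx, ty)].append(i)
    return tile_lights

# With hundreds of lights, most tiles end up touching only a handful,
# which is what keeps the forward shading pass affordable.
lights = [{"x": 400.0, "y": 300.0, "radius": 50.0},
          {"x": 1600.0, "y": 900.0, "radius": 120.0}]
bins = cull_lights(1920, 1080, lights)
print(len(bins[(25, 18)]))  # lights affecting one particular tile -> 1
```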

So how important?

(I was going to frame this in the context of Kepler GK104's advantage in performance per mm^2 and per watt versus GCN, but it seems Pitcairn and Tahiti really hit different metrics: Pitcairn is quite good at compute as well, relatively speaking, and against Tahiti it seems to be much, much more efficient per mm^2, so much so that some people are suggesting Tahiti has some bottlenecks in its design.)
 
GPGPU is definitely important in games and consoles, but I'd dare to say that DP is pretty much as useful as x87 would be. As long as programmability and SP throughput are good, it's good enough for games.
 
Discuss: Relevance of GPU compute for next gen consoles.

I note this because it seems one of the ways NV was able to cut power and die space while pumping up performance/mm^2 was to drastically reduce full-rate DP performance to 1/24th of SP speed. This (and other?) changes show up in a number of compute benchmarks, where the GK104, which otherwise runs all over the competition, falls pretty far behind in some situations. Compute is being used in games right now (e.g. some AO approaches), and a lot of what the Cell does on the PS3 to pull off effects not viable on RSX is compute-like work and significant post-processing. Likewise, AMD's Leo demo is a forward renderer that uses compute to cull lighting, drastically improving performance with many lights; that keeps the more in-depth shader complexity of a forward renderer while scaling the light count to be more competitive with a deferred approach. Given the trouble developers have getting MSAA to work properly with deferred engines (especially when they miss stuff, like forgetting to AA their tone maps) and the increased performance cost of MSAA on a deferred engine, it does seem that compute, in some situations, is an important part of new graphics hardware.

So how important?

(I was going to frame this in the context of Kepler GK104's advantage in performance per mm^2 and per watt versus GCN, but it seems Pitcairn and Tahiti really hit different metrics: Pitcairn is quite good at compute as well, relatively speaking, and against Tahiti it seems to be much, much more efficient per mm^2, so much so that some people are suggesting Tahiti has some bottlenecks in its design.)
Speaking just of Kepler, it looks to me like a victory (at least if we don't look at the TechReport results) that was paid for at a high price.
They significantly lowered the complexity of some parts of their chip to compete with AMD. They managed to catch up with Tahiti, but Tahiti doesn't scale that well versus, say, Pitcairn.
AMD still has some scaling issues at the high end (or maybe it simply needs more ROPs to have a clear win at really high resolutions); Nvidia, not so much.

It's a double-edged sword: if they come out with 3/4 or 2/3 of a GK104, the chip will likely perform close to 3/4 or 2/3 of a GK104, which is likely less than Pitcairn.

Overall, especially after reading the TechReport reviews, it seems AMD is in fact "winning" (from a tech POV only, though...). They have great density, great compute performance, and they have moved to dynamic scheduling, etc. Their latest GPUs seem to have fewer hiccups than their NV counterparts (see TechReport's 99th-percentile results). Overall it looks to me like AMD is successfully growing its architecture. They may move to more complex scheduling and their own take on Nvidia's GPCs soon.
 
Did the PC version of the first game even offer much beyond a higher rendering resolution? I wouldn't expect them to go the extra distance for Wii U if they don't do the same for PC.

At the end of the day, it's just another platform on the list to support, and to support as inexpensively as possible.

But he specifically said the Wii U hardware, not just the Wii U version of the game, was on par with PS360.

I don't think it precludes the Wii U from being 1.5X 360, but it probably precludes it from being 4X.

Sometimes I hate being right so often :p
 
Moved to the front; please answer, B3D:

On Wii U, is it likely the IBM CPU and the AMD GPU will *each* have their own on-die or off-die EDRAM, or will it be one unified pool?


Thanks in advance.
 
But he specifically said the Wii U hardware, not just the Wii U version of the game, was on par with PS360.

I don't think it precludes the Wii U from being 1.5X 360, but it probably precludes it from being 4X.

Sometimes I hate being right so often :p

My guess is it will need to be at least 1.5x because it also has the tablet graphics to generate.

Moved to the front; please answer, B3D:

On Wii U, is it likely the IBM CPU and the AMD GPU will *each* have their own on-die or off-die EDRAM, or will it be one unified pool?


Thanks in advance.

Not sure anyone can answer with more than an educated guess.

If the design ends up being an SoC then maybe they could share eDRAM, but even AMD's own SoCs aren't quite there yet on fine-grained sharing. An IBM CPU with eDRAM is possible, as they are already shipping products with eDRAM on die. AMD, via ATI and ArtX, has experience with eDRAM.

Since they are different vendors, and Nintendo is trying to streamline the design, creating a shared eDRAM pool has challenges on both the design side and the software side (allocation). It seems much simpler to have the PPC with whatever eDRAM it may have (and if it is a smaller/slower CPU you would hope it isn't cutting out real cache to make room for eDRAM), and on the GPU side to do whatever best fits the design. A Xenos-style setup probably isn't ideal, and going with enough GDDR3 would seem wise, but maybe they don't have big enough chips to justify such a wide bus, so eDRAM is necessary.
 
In my opinion, there's no benefit to having a small pool of eDRAM when GDDR5 provides enough bandwidth. 4xMSAA only needs 512Mb of frame buffer for a 1080p image. (I'm not sure if developers have an MSAA workaround for deferred rendering.)
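For what it's worth, a quick back-of-the-envelope check of that figure, assuming one 32-bit colour and one 32-bit depth/stencil value per sample (a simplification on my part; a real engine often keeps additional render targets around):

```python
# Rough framebuffer size for 1080p with 4xMSAA,
# assuming 32-bit colour plus 32-bit depth/stencil per sample.
width, height = 1920, 1080
samples = 4
bytes_per_sample = 4 + 4  # RGBA8 colour + D24S8 depth/stencil

total_bytes = width * height * samples * bytes_per_sample
print(total_bytes / 2**20, "MiB")       # ~63.3 MiB
print(total_bytes * 8 / 2**20, "Mbit")  # ~506 Mbit, roughly the 512Mb quoted above
```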

My prediction:

Underclocked & undervolted Tahiti Pro (1536 SPs, <=125 W)
Some quad-core (either Zambezi-based or Power7-based)
2-3 GB of GDDR5

Given the way Kepler performs (195 W), what are the odds of Microsoft using this chip? It has better performance per watt than Tahiti.
 
Given the way Kepler performs (195 W), what are the odds of Microsoft using this chip? It has better performance per watt than Tahiti.

And yet Pitcairn is ~25% better per watt than Kepler. Tahiti really isn't a likely mark simply because it carries baggage that needs to be trimmed. You're not going to see a 384-bit bus, and I doubt you're going to see double precision.

And I should add that I expect none of the above; it'll be some architecture we haven't seen yet, but I expect it's going to be something more akin to Pitcairn than Tahiti.
 
Yes, if it's actually coming out in 2013 or 2014 we will see 1-3 more GPU generations before release. I don't see the point in locking in a March 2012 GPU for something releasing in November 2013 or possibly even November 2014.

I think Pitcairn is close enough that Rein would consider it Kepler-like performance. Personally I see almost no reason, in either power or cost, why Pitcairn-like performance won't be the floor of what we're looking at, which is quite enticing.

Rein really nailed it in one of his billion interviews: the thing consoles do well above all else is push a high level of graphical performance out to the masses, compared to mobile or, as he said, "an economical computer". And by economical he meant what the masses buy at Dell or Best Buy. The consoles cannot afford to lose what they do well, their niche, or they will perish.
 
I rarely hear of EDRAM for the CPU. I assume it would only be for the GPU.

IBM is using it for their POWER7 architecture. The benefit is that it is much denser than SRAM, so you get huge area and, I believe, power savings. I think you will begin hearing much more about eDRAM on CPUs because of rising core counts: it often isn't enough to have a large L2 and then go straight to system memory. The problem is that an L3 cache built from SRAM is going to be large. Since you are going to design L3 to be denser/slower anyway, going with eDRAM on die allows for more density at the cost of speed. Being a couple of handfuls of cycles slower, but with more capacity, is a fair trade-off when you are looking at hundreds of cycles to go to system memory.

With all the talk of the importance of memory hierarchies, and the use of not just L3 but L4 caches, I think we will hear more and more about eDRAM on CPUs until another technology can supplant it. As long as it is a choice between SRAM and eDRAM, eDRAM makes sense in some situations.
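A toy average-latency comparison illustrates why "a few cycles slower but much bigger" can still come out ahead; every number below is an illustrative guess on my part, not a figure for any real part:

```python
# Toy model: average latency seen by requests that reach L3.
# All numbers below are made-up illustrative values, not measurements.

def avg_l3_access_cycles(l3_hit_cycles, l3_miss_rate, memory_cycles):
    # Cost of the L3 lookup plus the fraction of requests that fall
    # through to system memory.
    return l3_hit_cycles + l3_miss_rate * memory_cycles

memory_cycles = 300  # hypothetical round trip to system RAM

# Small, fast SRAM L3: lower latency, but misses more often because it's smaller.
sram = avg_l3_access_cycles(l3_hit_cycles=30, l3_miss_rate=0.20, memory_cycles=memory_cycles)

# Bigger, slower eDRAM L3: a handful of extra cycles per hit, fewer trips to RAM.
edram = avg_l3_access_cycles(l3_hit_cycles=40, l3_miss_rate=0.10, memory_cycles=memory_cycles)

print(sram, edram)  # 90.0 vs 70.0 cycles on average with these made-up numbers
```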

In my opinion, there's no benefit to having a small pool of eDRAM when GDDR5 provides enough bandwidth. 4xMSAA only needs 512Mb of frame buffer for a 1080p image. (I'm not sure if developers have an MSAA workaround for deferred rendering.)

When you are limited to a 128-bit bus but are targeting Pitcairn, or even higher Tahiti/Kepler GK104 class performance, it isn't enough. Even with a 256-bit bus using non-bleeding-edge GDDR5 (i.e. 4GHz instead of 5.5GHz) the system isn't so bandwidth-rich anymore. I am not saying eDRAM is the solution (in the context above, in the discussion with Megadrive about the Wii U, a wide bus more than likely won't fit on a smaller GPU), but it is a lot to ask for both the CPU and the GPU to have wide buses.
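For scale, the bandwidth arithmetic behind that is just effective data rate per pin times bus width in bytes (the clocks below are the example figures mentioned above):

```python
# Peak GDDR5 bandwidth = bus width in bytes * effective data rate per pin.
def gddr5_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(gddr5_bandwidth_gbs(128, 4.0))   #  64 GB/s - 128-bit bus, 4 Gbps GDDR5
print(gddr5_bandwidth_gbs(256, 4.0))   # 128 GB/s - 256-bit bus, same memory
print(gddr5_bandwidth_gbs(256, 5.5))   # 176 GB/s - 256-bit bus, bleeding-edge 5.5 Gbps
```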

And I should add that I expect none of the above; it'll be some architecture we haven't seen yet, but I expect it's going to be something more akin to Pitcairn than Tahiti.

If we are looking at 2013 I think it is almost certain we will see something different from GCN (even if just a revision), and if 2014 (holiday 2014 is over 2.5 years away) we are looking at AMD/NV having their next full architectures already out.

As for memory, unless something drastic happens (a 2012 or early 2013 launch, or one or both of MS/Sony cheaping out) I think we will see a shift to a memory architecture other than GDDR5. The potential to ramp up bandwidth and reduce the power draw of the memory controller and the traces (e.g. by using an interposer) is, I think, just too tempting to ignore. I cannot even imagine where GDDRx will be in 2014 and the performance/power requirements of building around it. It is fine in the PC space, where those concerns are more pliable, but it is becoming a dead end even there.

And as for the GPU, Pitcairn really does look like the performance/cost (power, die space) point consoles are looking at. Tahiti is about 50% larger than RSX while Pitcairn is a good 10%+ smaller, and Tahiti doesn't justify its size and power requirements with 1:1 scaling. Kepler is a little easier to swallow, but as reviews have trickled in it has become apparent that Tahiti isn't so badly off in comparison and that in certain things, like compute, GCN really has the brighter future. I would think, and hope, that compute (and maybe designing a console to leverage it) would be a design consideration for a platform that is going to last 7 years. Maybe compute will be hard, but when devs are looking at years 4-7, knowing there is a resource there to tap is better than a dead end.
 