Predict: The Next Generation Console Tech

Imo something like that is a fairly typical example of a brain fart. Just a little misfire. Even with the large time gap between those two cards, they were both top-end Nvidia GPUs, triggering similar associations, and the focus wasn't on the model but on the number of cards; that's exactly the type of moment when errors like that happen.

http://www.geforce.com/News/article...dia-talk-samaritan-and-the-future-of-graphics

Not Rein, but Martin Mittring: Senior Graphics Architect at Epic Games.

Honestly, I would rather see that kind of info come from someone like Mittring. But it's as they pointed out: "Samaritan doesn’t include artificial intelligence and other overheads of an actual, on-market game". Don't get me wrong: I think next-gen consoles will have very nice looking games, but I see that being achieved more through developer creativity than console hardware power.
 
...I think next-gen consoles will have very nice looking games, but I see that being achieved more through developer creativity than console hardware power.

Those two go hand in hand. Without better hardware, there's only so much one can do creatively, and at this point I think devs have got all they can out of this gen. Going into next gen with roughly the same spec will lead to ... a lack of creativity.

____________

Interesting coincidence: with the Pitcairn info finally in the wild, we see a competent chip weighing in at 212mm², 100-130W, and 2.8B transistors ... Where did I hear that number before? :p

Assuming a 32/28nm launch in 2012 would yield an 8x transistor count, this would amount to a budget of roughly 4 billion (497M × 8 = 3,976M), if we assume an equal budget per process node.

This leads to some pretty interesting potential hardware:

With that budget, MS could extend the xb360 architecture to the following:

10MB eDRAM (100M) => 60MB eDRAM (600M): enough for a full 1080p frame buffer with 4xAA

3-core XCPU (165M) => 9-core XCPU (495M), or an upgraded 6-core PPE with OoOE and larger cache, along with an ARM core (13M trans)

This leaves a hefty 2.8B trans available for the XGPU (rough arithmetic sanity-checked in the sketch below)...

http://forum.beyond3d.com/showpost.php?p=1599737&postcount=8292
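
To sanity-check that budget arithmetic, here's a minimal sketch in Python. Every figure below is the post's own assumption about Xbox 360-era transistor counts and 8x process scaling, not a confirmed spec:

```python
# The post's transistor-budget math, plus a check on the eDRAM claim.
# All figures are the post's assumptions, not confirmed next-gen specs.

XB360_TRANS_M = 497        # Xenon CPU + Xenos GPU + eDRAM die at launch (millions)
SCALE = 8                  # assumed transistor gain moving to 32/28nm

budget_m = XB360_TRANS_M * SCALE           # 3,976M, i.e. the ~4B budget
edram_m  = 600                             # 60MB eDRAM at ~10M transistors/MB
cpu_m    = 495 + 13                        # 9-core XCPU plus an ARM core
gpu_m    = budget_m - edram_m - cpu_m
print(f"XGPU budget: ~{gpu_m}M transistors")    # ~2,868M, the "2.8B" quoted

# Is 60MB of eDRAM really enough for a 1080p frame buffer with 4xAA?
# Assuming 4 bytes color + 4 bytes depth/stencil per sample, 4 samples/pixel:
fb_bytes = 1920 * 1080 * 4 * (4 + 4)
print(f"1080p 4xMSAA: {fb_bytes / 2**20:.1f} MiB")  # ~63.3 MiB: a very tight fit
```

Worth noting: under those byte-per-sample assumptions a full 1080p 4xMSAA target actually lands slightly over 60MB, so some tiling or compression would still be in play.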

I just hope someone sacks up and produces a decent console. Sony, MS, Nintendo, whoever.
 
I think there is no chance in hell of Epic investing tons of money to develop fancy new tech for UE4 if the specs of the Sony/MS consoles were not up to the task...

Even if they don't have devkits, they know the ballpark.
 
An AMD 78xx-derivative GPU would likely be an excellent choice for a higher-powered console releasing in 2013. All they'd likely have to do is cut the bus down to 128/192-bit to allow for future shrinking and reduce the number of memory chips required, and they're good to go. 4-6 4Gb RAM chips ought to give them a very strong 2-3GB of system RAM, which ought to be good enough for a next-generation console. They could push it to 8-12 chips if they were to go with a clamshell design, and that'd give them an obvious cost saving when 8Gb chips are released, as well as doubling the potential memory on the same bus size.
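
A minimal sketch of those memory configurations, assuming standard GDDR5 parts with 32-bit interfaces and x16 clamshell mode (the chip counts and densities are the post's hypotheticals, not a known design):

```python
# Capacity and bus width for the GDDR5 configurations described above.
# Assumes 32-bit chips; clamshell pairs two chips per channel at x16 each,
# doubling capacity on the same bus width.

def config(chips: int, density_gbit: int, clamshell: bool = False):
    capacity_gb = chips * density_gbit / 8              # 8 Gbit per GB
    bus_bits = chips * 32 // (2 if clamshell else 1)
    return capacity_gb, bus_bits

print(config(4, 4))                   # (2.0, 128): 4 chips of 4Gb
print(config(6, 4))                   # (3.0, 192): 6 chips of 4Gb
print(config(12, 4, clamshell=True))  # (6.0, 192): 12 chips of 4Gb, clamshell
print(config(6, 8))                   # (6.0, 192): same capacity once 8Gb parts ship
```

That last pair is the cost-saving argument: a clamshell board doubles capacity today, then drops back to half the chip count on the same bus once 8Gb devices arrive.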
 
I think there is no chance in hell of Epic investing tons of money to develop fancy new tech for UE4 if the specs of the Sony/MS consoles were not up to the task...

Even if they don't have devkits, they know the ballpark.

Eh ... agree and disagree.

Just because Sony/MS/Nintendo may not be pushing high spec doesn't mean there aren't others that may be.

We've heard rumblings of a Valve console, and an Apple console (I know), and frankly there are likely to be quite a few gamers such as myself who will say: "if all you're offering is a <125mm² GPU in your next-gen box, then I'm going to PC gaming".

So even though UE4 might be gimped by the hardware from MS/Sony/N, that doesn't mean they won't be demoing the top end on PCs and aiming for PC customers ... or using the Corvette (BF3) model: show and hype the uber experience, and sell the gimped one.
 
An AMD 78xx-derivative GPU would likely be an excellent choice for a higher-powered console releasing in 2013. All they'd likely have to do is cut the bus down to 128/192-bit to allow for future shrinking and reduce the number of memory chips required, and they're good to go. 4-6 4Gb RAM chips ought to give them a very strong 2-3GB of system RAM, which ought to be good enough for a next-generation console. They could push it to 8-12 chips if they were to go with a clamshell design, and that'd give them an obvious cost saving when 8Gb chips are released, as well as doubling the potential memory on the same bus size.

Either that or go with XDR2 ...
 
Either that or go with XDR2 ...

XDR2 doesn't actually exist anywhere but in people's imaginations... There aren't even samples of it as far as I'm aware. If anything, it'll be some kind of DDR or GDDR variant in the next-generation consoles due to economies of scale, and if they can take advantage of chip stacking, I doubt that would apply to XDR2 to nearly the same extent it would apply to, say, DDR4.
 
It is pretty odd there are no plans mentioned yet for a GDDR derivative of DDR4 (16-bit prefetch). Not even a hint. Maybe they're running into a lot of problems.
 
Could it just be that graphics companies are resigned to the fate of stacked memory and interposers? The fact that AMD showed SA some sort of sample in late 2011 indicates they are seriously looking at addressing memory issues outside of the traditional format.
 
Honestly I would rather see that kind of info come from someone like Mittring. But it's like they pointed out "Samaritan doesn’t include artificial intelligence and other overheads of an actual, on-market game". Don't get me wrong. I think next gen consoles will have very nice looking games, but I see it achieved more from developer creativity than console hardware power.

CPU says hello...
 
Haha, I don't think you understand optimization at all.

Regardless, it does not say "only nvidia flops" on Epic's slide, period.

:rolleyes:

You are the one that does not understand.
The simple fact is that AMD GPU performance is not being held back by the PC any more than Nvidia's GPUs are!

Internally, AMD GPUs cannot make use of their high peak floating-point speed due to a number of factors; Nvidia's GPUs, on the other hand, have a lower peak but better sustained performance.

If AMD could have pushed the floating-point performance up without hurting performance elsewhere, they would have!

To think that AMD's GPUs have a huge reserve of FP power just waiting to be unlocked by use in a console is naive.
 
Those two go hand in hand. Without better hardware, there's only so much one can do creatively, and at this point I think devs have got all they can out of this gen. Going into next gen with roughly the same spec will lead to ... a lack of creativity.

I agree. To reach "that level", though, I expect developer "tricks" to fill the void that the power can't cover on its own.

CPU says hello...

I had a small brain fart. ;)

Anyway, just as I said I'd need to see 1080p Samaritan on one AMD GPU to believe it, I'd apply the same to needing to see Epic get it down to one 580 before I believe they could do it. And, as I said, almost a year later that hasn't happened.
 
Internally, AMD GPUs cannot make use of their high peak floating-point speed due to a number of factors; Nvidia's GPUs, on the other hand, have a lower peak but better sustained performance.

Workload conditions (compute versus rendering) have a lot to say about this. The needs of many compute problems are quite different from those of a raster problem. A VLIW4 design at, say, 28nm and 200mm² versus a GCN/Fermi design at the same process and die size would see quite different performance: the former probably better for most games up to this point, as it has high utilization overlap with those workloads; the latter better for future work, where compute becomes an important aspect and where previous GPU designs were not robust enough to maintain performance in these specialized programs. A toy model of that trade-off is sketched below.
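
A minimal sketch of that trade-off: effective throughput as peak FLOPS times a workload-dependent utilization factor. All numbers here are made-up illustrations, not benchmarks of any real chip:

```python
# Toy model: effective throughput = peak FLOPS x utilization, where
# utilization depends on the workload. Figures are illustrative only.

peak_gflops = {"VLIW4": 2700, "GCN-style": 2000}   # hypothetical equal-area designs

utilization = {
    "raster/game workload": {"VLIW4": 0.75, "GCN-style": 0.80},
    "divergent compute":    {"VLIW4": 0.35, "GCN-style": 0.70},
}

for workload, per_arch in utilization.items():
    for arch, u in per_arch.items():
        eff = peak_gflops[arch] * u
        print(f"{workload:22s} {arch:10s} ~{eff:.0f} effective GFLOPS")
```

Under these assumed numbers the VLIW4 part wins the raster case on sheer peak, while the GCN-style part wins once utilization collapses on divergent compute, which is the point being made above.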
 
Bitcoin mining is a nice "comparison" between the flops of Nvidia and AMD... at least it was half a year ago; I'm not sure of today's numbers. Back then, even the highest-end Nvidia GPUs couldn't hold a candle to the meager 5650 in my laptop. Though the calculations needed for Bitcoin mining probably favor AMD by a large margin, hence they are much faster.

Does this translate to better gaming performance? Hell no, but neither is a motorcycle a good choice for a stock car race...
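
For what it's worth, the usual explanation for that gap: SHA-256 (the hash behind Bitcoin) is dominated by 32-bit integer rotates, which AMD's VLIW ISA of that era could issue as a single instruction (BIT_ALIGN_INT), while contemporary Nvidia hardware emulated them with a shift/shift/or sequence, roughly tripling the cost of the hot operation. A sketch of that operation:

```python
# The 32-bit rotate at the heart of SHA-256. One instruction on VLIW-era
# AMD GPUs (BIT_ALIGN_INT); emulated with two shifts and an OR elsewhere.

def rotr32(x: int, n: int) -> int:
    """Rotate a 32-bit word right by n bits (0 < n < 32)."""
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

# One of the SHA-256 "Sigma" functions, built entirely from rotates:
def big_sigma0(x: int) -> int:
    return rotr32(x, 2) ^ rotr32(x, 13) ^ rotr32(x, 22)

print(hex(big_sigma0(0x6A09E667)))  # applied to SHA-256's initial hash word h0
```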
 
Bitcoin mining is a nice "comparison" between the flops of Nvidia and AMD... at least it was half a year ago; I'm not sure of today's numbers. Back then, even the highest-end Nvidia GPUs couldn't hold a candle to the meager 5650 in my laptop. Though the calculations needed for Bitcoin mining probably favor AMD by a large margin, hence they are much faster.

Was that due to speed or performance/watt/$?
 
http://www.tomshardware.com/news/patent-microsoft-3d-mouse-gyroscope,14878.html

MS was granted a patent, filed in 2006, for a 3D mouse.

I posited a long time ago, prior to the Wii U, that MS/Sony would either have a screen on the controller (not a wild prediction, considering the Dreamcast kind of did this years ago) or that we could see a quasi Move-Classic controller where the wands "break out": essentially a normal pronged controller that could be separated into 3D wands. I'm pretty curious at this point what MS and Sony will come up with. Personally, a Kinect-like camera with a traditional/breakout Move controller would pretty much cover a huge array of input scenarios.

This is what I've been banging on about for eons!

It'd cover all the traditional and motion-control bases and would basically enable more inventive gameplay out of the box, simply by being included with every console.

Sony... MS... make it so number one!

:Jean-LucPicardface:
 