Predict: The Next Generation Console Tech

The biggest problem with eDRAM is how to connect it to the GPU over a fast enough bus for it to make any sense. I'd dare say anything less than 64 GB/s would be a waste, and double that would probably be a good solution. Stacking could solve the problem, but it won't be easy or cheap.
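As a back-of-envelope check (the workload numbers below are my own assumptions, not any real spec), even a fairly modest 1080p pass gets into the tens of GB/s:

Code:
# Rough eDRAM traffic estimate for a single 1080p target, 4x MSAA, colour + depth.
# All workload numbers are assumptions for illustration.
pixels = 1920 * 1080
samples = 4                  # 4x MSAA
bytes_per_sample = 4 + 4     # RGBA8 colour + 32-bit depth
read_write = 2               # read-modify-write per shaded sample
overdraw = 3                 # average shaded fragments per pixel (guess)
fps = 60

bytes_per_frame = pixels * samples * bytes_per_sample * read_write * overdraw
print(f"~{bytes_per_frame * fps / 1e9:.0f} GB/s")   # ~24 GB/s for this one pass

Even this single-pass case lands in the tens of GB/s; add more passes and higher overdraw and 64 GB/s stops looking generous.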
 
Oh good grief, even GAF stoops to a new low. That spec sheet is courtesy of Texan, as he was known on B3D. http://s3gal3aks.wordpress.com/2011/11/16/world-exclusive/
I find that implausible. Where's the mention of Naomi 3?

Actually, 100 MB of eDRAM is not a crazy number.
It's doable, but would it be useable? It's way more than needed for two 1080p buffers. It'd have to take on a new role as working space like PS2, which would be a step out of the ordinary. eDRAM in XB360 was there to enable a cheaper, slower main RAM bus. If they had 256 GB/s, eDRAM would be redundant unless they've decided RAM access is the bottleneck for their future renderers (while everyone else is looking at shader calcs as the bottlenecks).

I haven't done the maths, but 100 MB does look good for deferred rendering, where you could store all buffers in the eDRAM and process them to your heart's content without messing up system BW. Still overkill, but that's one case where I think that much could be used.
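Roughly, though (the deferred layout below is just an assumed example to put 100 MB in perspective):

Code:
# Rough 1080p buffer sizing; the deferred layout is a made-up but typical-ish example.
pixels = 1920 * 1080
MB = 2**20

double_buffer = 2 * pixels * 4 / MB        # two RGBA8 colour buffers
depth         = pixels * 4 / MB            # 32-bit depth/stencil
gbuffer       = (4 + 1) * pixels * 4 / MB  # 4 RGBA8 render targets + depth

print(f"two 1080p colour buffers: {double_buffer:.1f} MB")  # ~15.8 MB
print(f"depth/stencil:            {depth:.1f} MB")          # ~7.9 MB
print(f"5-target deferred set:    {gbuffer:.1f} MB")        # ~39.5 MB

So plain double-buffering barely scratches 100 MB, while a full deferred set (plus MSAA or fatter formats) starts to justify it.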
 
I'm not sure if this has been discussed before:
http://www.physorg.com/news/2011-11-calif-jury-rambus-antitrust.html
http://www.pcworld.com/article/244068/rambus_considers_antitrust_appeal_as_stock_falls.html
http://www.reuters.com/article/2011/11/17/us-rambus-micron-verdict-idUSTRE7AF1XL20111117

Essentially Rambus has lost tons of money due to the antitrust case. Rambus was responsible for some of the HW found in the PS3. My question is: what's the impact of Rambus's valuation on PS4 HW plans? Does this make them "desperate", in need of a big deal and willing to sell inexpensive but powerful goods (obviously not below the cost of manufacturing - that'd kill them), or does this mean Sony (or MS) should not team with them for tech since it's dangerous? Anyone?
 
Essentially Rambus has lost tons of money due to the antitrust case. Rambus was responsible for some of the HW found in the PS3. My question is: what's the impact of Rambus's valuation on PS4 HW plans? Does this make them "desperate", in need of a big deal and willing to sell inexpensive but powerful goods (obviously not below the cost of manufacturing - that'd kill them), or does this mean Sony (or MS) should not team with them for tech since it's dangerous? Anyone?

Rambus does not manufacture anything. They are a pure IP shop. If you want Rambus memory, you need to get a license from Rambus and then buy the chips from someone who is willing to make them for you.
 
I reckon a quad-core CPU with SMT and performance roughly on par with a 2600K.

2-4 GB of total system memory.

GPU roughly a GTX 570/6950, but die-shrunk.

When the PS3 and 360 released they had GPUs that were level with the very highest-end PCs, but I can't see that happening next time round; if it did, you would be looking at GPU power equal to or greater than a 6990/GTX 590.
 
Rambus does not manufacture anything. They are a pure IP shop. If you want Rambus memory, you need to get a license from Rambus and then buy the chips from someone who is willing to make them for you.

True, my bad. Still the question remains: does this impact the future of their tech?
 
I reckon a quad-core CPU with SMT and performance roughly on par with a 2600K.

2-4 GB of total system memory.

GPU roughly a GTX 570/6950, but die-shrunk.

When the PS3 and 360 released they had GPUs that were level with the very highest-end PCs, but I can't see that happening next time round; if it did, you would be looking at GPU power equal to or greater than a 6990/GTX 590.



When the PS3 launched it did not have the highest-end GPU; Nvidia had launched the G80/8800, while RSX was a cut-down G70.
 
When the PS3 launched it did not have the highest-end GPU; Nvidia had launched the G80/8800, while RSX was a cut-down G70.

The 8800 GTX was released a few weeks after, iirc.

And still my point remains: the PS3 had a GPU that was much faster than what 90%+ of the world's PC user base had, and more grunt than all but the highest-end gaming PCs.

Can you see the same thing happening next gen? Because I can't.
 
The 8800 GTX was released a few weeks after, iirc.

And still my point remains: the PS3 had a GPU that was much faster than what 90%+ of the world's PC user base had, and more grunt than all but the highest-end gaming PCs.

Can you see the same thing happening next gen? Because I can't.

Easily. The vast majority of GPUs sold today are weak integrated ones.

Cheers
 
The 8800 GTX was released a few weeks after, iirc.

And still my point remains: the PS3 had a GPU that was much faster than what 90%+ of the world's PC user base had, and more grunt than all but the highest-end gaming PCs.

Can you see the same thing happening next gen? Because I can't.

According to Wiki/Google, the 8800 GTX was released Nov 6 and the PS3 Nov 17.

Even before then, I'm pretty sure there were G70s (?) clocked up to 650 MHz.

I think you'd have a better argument with Xenos in 2005; it seems on par with RSX, which is on par with the 7800 GTX, which was on par with the X1800 XT, the fastest PC cards of late 2005. Except for maybe a slightly lower clock.

But I agree with the general thrust of your argument: last time, at least with Xenos, we had something near the highest end. I think this time it's pretty obvious we're going to get something from the mid-high ranks rather than the high-high, unless I'm surprised. The highest-end PC GPUs seem to be much bigger and hotter today than they were then.
 
I reckon a quad-core CPU with SMT and performance roughly on par with a 2600K.

2-4 GB of total system memory.

GPU roughly a GTX 570/6950, but die-shrunk.

When the PS3 and 360 released they had GPUs that were level with the very highest-end PCs, but I can't see that happening next time round; if it did, you would be looking at GPU power equal to or greater than a 6990/GTX 590.

This would be great:

Ivy Bridge quad core with HT @ 3 GHz should have a TDP of ~50 W (the i7-2600S is 65 W @ 2.8 GHz @ 32nm)
2 GB DDR3, quad channel
AMD Radeon HD 7850, est. <=100 W @ 28nm
1 GB GDDR5

Should fit in a PS3 power envelope (even in that of the latest Slim) and would be a beast as an embedded system.

Also looking forward to Piledriver (especially the power draw); maybe the rumored hexacore of the Nextbox is a 3-module Piledriver (AMD promotes this as a hexa-core). Trinity only has 2 modules though, IIRC, and I'm not sure about the number of shader units.
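Rough power budget for that box (every figure below is my own guess based on the desktop parts named above, nothing confirmed):

Code:
# Hypothetical component power budget; all values are guesses for illustration.
budget_watts = {
    "Ivy Bridge quad core w/ HT": 50,    # assumption, extrapolated from the i7-2600S figure
    "HD 7850-class GPU @ 28nm":   100,   # assumption
    "2 GB DDR3 + 1 GB GDDR5":     15,    # rough guess
    "drive, HDD, I/O, fans":      20,    # rough guess
}
total = sum(budget_watts.values())
psu_efficiency = 0.85
print(f"components: {total} W, at the wall: ~{total / psu_efficiency:.0f} W")  # ~218 W

That lands in roughly launch-PS3 territory (the original drew on the order of 200 W at the wall); hitting the latest Slim's envelope would need the guesses above to come down quite a bit.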
 
According to Wiki/Google, the 8800 GTX was released Nov 6 and the PS3 Nov 17.

Even before then, I'm pretty sure there were G70s (?) clocked up to 650 MHz.

I think you'd have a better argument with Xenos in 2005; it seems on par with RSX, which is on par with the 7800 GTX, which was on par with the X1800 XT, the fastest PC cards of late 2005. Except for maybe a slightly lower clock.

But I agree with the general thrust of your argument: last time, at least with Xenos, we had something near the highest end. I think this time it's pretty obvious we're going to get something from the mid-high ranks rather than the high-high, unless I'm surprised. The highest-end PC GPUs seem to be much bigger and hotter today than they were then.

Exactly. Even with die shrinks, and taking power and heat into account, an AMD 6950 would be out of reach.

I can see a 6850/6870 being much more realistic. There are 6850s out there that don't even need a 6-pin PCIe power connector, as they draw all their power from the PCIe slot.

Perfect card for next generation, as it's power efficient and a good 4-5x more powerful than RSX?
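For that 4-5x figure, a crude peak-throughput comparison (using commonly quoted estimates that gloss over the very different architectures):

Code:
# Crude peak shader-FLOPS comparison; figures are commonly cited estimates, not like-for-like.
rsx_gflops = 230.0                 # RSX programmable shading, often quoted around 200-230 GFLOPS
hd6850_gflops = 960 * 2 * 0.775    # 960 ALUs * 2 ops (MAD) * 0.775 GHz = 1488 GFLOPS
print(f"HD 6850 / RSX: ~{hd6850_gflops / rsx_gflops:.1f}x")  # ~6.5x on paper

Peak ratios always flatter the newer part, so "a good 4-5x" in practice doesn't seem far off.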
 
According to Wiki/Google, the 8800 GTX was released Nov 6 and the PS3 Nov 17.

Even before then, I'm pretty sure there were G70s (?) clocked up to 650 MHz.

I think you'd have a better argument with Xenos in 2005; it seems on par with RSX, which is on par with the 7800 GTX, which was on par with the X1800 XT, the fastest PC cards of late 2005. Except for maybe a slightly lower clock.

But I agree with the general thrust of your argument: last time, at least with Xenos, we had something near the highest end. I think this time it's pretty obvious we're going to get something from the mid-high ranks rather than the high-high, unless I'm surprised. The highest-end PC GPUs seem to be much bigger and hotter today than they were then.

If they take the highest-end GPU of today and get rid of all the double-precision and other such doodads that aren't necessary for a console GPU, my question is: how big and how hot would such a customised GPU part be? Small and cool enough to fit in a console box with a reasonable cooling solution?

What if the CPU is much smaller in terms of die area than what XCPU and CELL were at their launch? With a fixed console power envelope, and more of an emphasis on the GPU in terms of the overall system design, and with a heavily customised GPU part, then perhaps it could be possible for us to get a console in 2012 with the highest end (single) GPU of today?
 
Exactly. Even with die shrinks, and taking power and heat into account, an AMD 6950 would be out of reach.

I can see a 6850/6870 being much more realistic. There are 6850s out there that don't even need a 6-pin PCIe power connector, as they draw all their power from the PCIe slot.

Perfect card for next generation, as it's power efficient and a good 4-5x more powerful than RSX?

Umm, yeah, but remember we're likely dealing with 2013 at the earliest (2012 if you really must, but I say no chance on that).

There will be a whole new generation of PC cards by then; they should hit in early 2012. Today's high end will be tomorrow's medium range, so I would certainly hope they can hit at least last gen's high end (6950-6970), if not better, with decent thermals/power/etc.

If they take the highest-end GPU of today and get rid of all the double-precision and other such doodads that aren't necessary for a console GPU, my question is: how big and how hot would such a customised GPU part be? Small and cool enough to fit in a console box with a reasonable cooling solution?

I have no idea, but I don't think it's very simple at all. It will either have to have been a custom part in development for a while, or, I suspect, something a lot closer to off the shelf. Architecting a GPU is an enormous undertaking by now... so you'll probably have to use whatever they have for desktop as a base, even if parts of it aren't needed on a console... that's just my hunch.
 
Exactly. Even with die shrinks, and taking power and heat into account, an AMD 6950 would be out of reach.

I can see a 6850/6870 being much more realistic. There are 6850s out there that don't even need a 6-pin PCIe power connector, as they draw all their power from the PCIe slot.

Perfect card for next generation, as it's power efficient and a good 4-5x more powerful than RSX?

Really? The PCIe slot is rated for 75 watts, and I can't see any 6850 being that efficient. My 6870 actually needs two plugs, as it's rated for 151 watts... one watt more than the slot plus a single 6-pin plug can supply. My G80 beforehand only needed one plug for its 148 watts. I can't see a 6850 going without one.
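The maths behind that one watt, using the PCIe spec limits:

Code:
# PCIe power delivery vs. rated board power (spec limits, board ratings as quoted above).
slot_limit  = 75    # W, PCIe x16 slot
six_pin     = 75    # W, one 6-pin PEG connector
hd6870_tdp  = 151   # W, rated board power

deficit = hd6870_tdp - (slot_limit + six_pin)
print(f"slot + one 6-pin = {slot_limit + six_pin} W, board wants {hd6870_tdp} W "
      f"-> {deficit} W over, hence the second plug")

And a 6850, at roughly 127 W rated, would still need the 6-pin, since the slot alone tops out at 75 W.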
 
If they take the highest-end GPU of today and get rid of all the double-precision and other such doodads that aren't necessary for a console GPU, my question is: how big and how hot would such a customised GPU part be? Small and cool enough to fit in a console box with a reasonable cooling solution?

What if the CPU is much smaller in terms of die area than what XCPU and CELL were at their launch? With a fixed console power envelope, and more of an emphasis on the GPU in terms of the overall system design, and with a heavily customised GPU part, then perhaps it could be possible for us to get a console in 2012 with the highest end (single) GPU of today?

I have wondered that myself too. How much die area would be saved by getting rid of the DP logic in a GTX 580?
 