I find that implausible. Where's the mention of Naomi 3? Oh good grief, even GAF stoops to a new low. That spec sheet is courtesy of Texan, as he was known on B3D. http://s3gal3aks.wordpress.com/2011/11/16/world-exclusive/
It's doable, but would it be useable? It's way more than needed for two 1080p buffers. It'd have to take on a new role as working space like PS2, which would be a step out of the ordinary. eDRAM in XB360 was there to enable a cheaper, slower main RAM bus. If they had 256 GB/s, eDRAM would be redundant unless they've decided RAM access is the bottleneck for their future renderers (while everyone else is looking at shader calcs as the bottleneck).

Actually, the 100 MB of eDRAM is not a crazy number.
Essentially, Rambus has lost a lot of money due to the antitrust case. Rambus was responsible for some of the HW found in PS3. My question is: what's the impact of Rambus's valuation on PS4 HW plans? Does this make them "desperate" for a big deal and willing to sell inexpensive but powerful goods (obviously not below the cost of manufacturing - that'd kill them), or does it mean Sony (or MS) should not team up with them for tech since it's risky? Anyone?
3D? It's way more than needed for two 1080p buffers.
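For what it's worth, a quick back-of-the-envelope sketch (my own illustrative assumptions, not anything from the spec sheet): even allowing for stereoscopic 3D, plain 1080p render targets don't get anywhere near 100 MB.

```python
# Back-of-the-envelope framebuffer sizing. Assumptions (mine, for illustration):
# plain 1080p targets, 32-bit colour, 32-bit depth/stencil, no MSAA or HDR formats.
WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 4

surface_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / (1024 ** 2)

print(f"one 1080p surface:    {surface_mb:.1f} MB")      # ~7.9 MB
print(f"colour + depth:       {2 * surface_mb:.1f} MB")  # ~15.8 MB
print(f"stereo 3D (two eyes): {4 * surface_mb:.1f} MB")  # ~31.6 MB
```

Even doubling everything for 3D leaves most of a 100 MB pool empty, which is why it would have to double as working space to earn its area.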
Rambus does not manufacture anything. They are a pure IP shop. If you want Rambus memory, you need to get a license from Rambus and then buy the chips from someone who is willing to make them for you.
I reckon a quad-core CPU with SMT and performance roughly equal to a 2600K.
2-4 GB of total system memory
GPU roughly a GTX 570/6950, but die shrunk
When PS3 and 360 released, they had GPUs that were level with the very highest-end PCs, but I can't see that happening next time round. If it did, you would be looking at GPU power equal to or greater than a 6990/GTX 590.
One could also claim that the highest end at that time was an SLI setup. When PS3 launched it did not have the highest-end GPU; Nvidia had launched the G80/8800, while RSX was a cut-down G70.
8800GTX was released a few weeks after, IIRC.
And still my point remains: PS3 had a GPU that was much faster than 90%+ of the world's PC user base and had more grunt than all but the highest-end gaming PCs.
Can you see the same thing happening next gen? Because I can't.
Easily. The vast majority of GPUs sold today are weak integrated ones.
Cheers
According to Wiki/Google, the 8800GTX was released Nov 6 and PS3 on Nov 17.
Even before then, I'm pretty sure they had G70s (?) clocked up to 650 MHz.
I think you'd have a better argument with Xenos in 2005. It seems on par with RSX, which is on par with the 7800GTX, which was on par with the X1800XT, the fastest PC cards of late 2005, except for maybe a slightly lower clock.
But I agree with the general thrust of your argument: last time, at least with Xenos, we had something near the highest end. I think this time it's pretty obvious we're going to get something from the mid-high ranks rather than the high-high, unless I'm surprised. The highest-end PC GPUs seem to be much bigger and hotter today than they were then.
Exactly. With die shrinks, and taking power and heat into account, even using an AMD 6950 would be out of reach.
I can see a 6850/6870 being much more realistic. There are 6850s out there that don't even need a 6-pin PCIe power connector, as they draw all their power from the PCIe slot.
Perfect card for the next generation, as it's power efficient and a good 4-5x more powerful than RSX?
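A rough sanity check on that multiplier (my own ballpark figures; RSX's programmable-shader throughput gets quoted anywhere from roughly 190 to 400 GFLOPS depending on what you count, and a stock HD 6850 is usually put around 1.5 TFLOPS):

```python
# Illustrative peak-shader-FLOPS comparison. All figures are commonly quoted
# ballpark numbers, not measurements, and peak FLOPS is a crude proxy anyway.
HD6850_GFLOPS = 960 * 2 * 0.775                 # 960 ALUs x 2 ops/clk x 0.775 GHz ~= 1488
RSX_GFLOPS_LOW, RSX_GFLOPS_HIGH = 192.0, 400.0  # programmable-shader vs marketing figure

print(f"vs the low RSX figure:  {HD6850_GFLOPS / RSX_GFLOPS_LOW:.1f}x")   # ~7.8x
print(f"vs the high RSX figure: {HD6850_GFLOPS / RSX_GFLOPS_HIGH:.1f}x")  # ~3.7x
```

So 4-5x looks about right on the generous RSX numbers, and conservative on raw shader throughput alone.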
If they take the highest-end GPU of today and get rid of all the double precision and other such doodads that aren't necessary for a console GPU, my question is how big and how hot such a customised GPU part would be. Enough to fit in a console box with a reasonable cooling solution?
What if the CPU is much smaller in terms of die area than what XCPU and CELL were at their launch? With a fixed console power envelope, more of an emphasis on the GPU in the overall system design, and a heavily customised GPU part, perhaps we could get a console in 2012 with the highest-end (single) GPU of today?
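As a hedged illustration of that trade-off (every number below is an assumption picked to make the arithmetic concrete, not a leaked spec): hold the box to roughly a launch-era power envelope, make the CPU small, and see what's left for the GPU.

```python
# Hypothetical console power budget; every number below is an assumption chosen
# only to illustrate the trade-off, not a leaked or official spec.
CONSOLE_ENVELOPE_W = 200   # roughly launch-360/PS3 territory
SMALL_CPU_W        = 30    # a modest quad-core at console clocks
REST_OF_SYSTEM_W   = 40    # RAM, optical drive, I/O, PSU losses

gpu_budget_w = CONSOLE_ENVELOPE_W - SMALL_CPU_W - REST_OF_SYSTEM_W
print(f"GPU budget: ~{gpu_budget_w} W")  # ~130 W

# Typical quoted board-power/TDP figures for the cards mentioned in the thread.
for card, tdp_w in [("HD 6850", 127), ("HD 6950", 200), ("GTX 570", 219), ("GTX 580", 244)]:
    verdict = "fits" if tdp_w <= gpu_budget_w else "needs a shrink / cut-down part"
    print(f"{card}: ~{tdp_w} W -> {verdict}")
```

On assumptions like these, a 6850-class part slots in almost as-is, while today's highest-end single GPUs only make it if the customisation and a node shrink claw back a fair chunk of power.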