Predict: The Next Generation Console Tech

Think of it this way: if they were thinking of using an APU with 10 compute units plus a GPU with 20, it would be a lot cheaper in terms of design, OS work, fabrication, etc. to just use a single GPU with 30 compute units to start with. Same overall power, no multi-GPU integration headaches (and fewer transistors, since you wouldn't need to duplicate the common sections).
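
For rough numbers, here's a back-of-the-envelope sketch (assuming GCN-style CUs with 64 ALUs each, one FMA = 2 FLOPs per ALU per clock, and an arbitrary 800 MHz clock; all figures are illustrative, not leaked specs):

```python
# Theoretical FP32 throughput of GCN-style compute units.
# Assumptions (illustrative): 64 ALUs per CU, 2 FLOPs per ALU
# per clock (fused multiply-add), 800 MHz clock.
ALUS_PER_CU = 64
FLOPS_PER_ALU_PER_CLOCK = 2  # one FMA counts as two FLOPs
CLOCK_HZ = 800e6

def gflops(compute_units):
    return compute_units * ALUS_PER_CU * FLOPS_PER_ALU_PER_CLOCK * CLOCK_HZ / 1e9

print(gflops(10) + gflops(20))  # 3072.0 GFLOPS, split across two chips
print(gflops(30))               # 3072.0 GFLOPS, one chip, no multi-GPU headaches
```

The raw throughput is identical either way; the single-chip option just drops the duplicated front-end logic and the cross-chip traffic.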

30 compute units in an APU, is such a thing even possible?
 
No clue, it was just an example with arbitrary numbers. If it were done for real, it would probably be two chips, a CPU and a GPU, like current machines. Although for performance reasons it would be better to integrate them into an APU from the start. When the 360 got merged down to a single chip, they actually had to add silicon to simulate the slower behavior of the old copper traces and buses connecting the two-chip solution.
 
PS4 launching last doesn't bode well for Sony, IMO. The only way they took advantage of it this gen was with Blu-ray, and I fear they won't have a hardware advantage this time either. Of course, they are in a very different position and have a different attitude now. I'd love to be proven wrong... If Sony were to use HMC and go for larger chips with the intention of a quick shrink to 20nm in 6-12 months, it could be worth launching 6 months after MS. A couple of months would probably net them no performance advantage and make them miss the holiday season, and a year would be a bit too long to justify any performance advantage...
 
Probably best not to get your hopes up; it is not particularly cost-effective to ship a box with two GPUs.

I totally agree with that. It is a crazy idea to have two asymmetric GPUs in a budget/TDP-limited console; the silicon budget would be better spent on a more powerful, feature-rich GPU, or a more powerful CPU.
Unless they go the Nintendo way, put all their effort into the GPU side, and give us a 1.24 GHz CPU :LOL:
 
I would claim they didn't put too much effort into either side. :rolleyes:

:LOL: Relativity, everything is relative, my friend. Compared to their CPU, the GPU in the Wii U is revolutionary, a real generational leap with huge risk taken by Nintendo :LOL:

I believe Nintendo took the extreme road of relying on the GPU, versus the other extreme once taken by Sony with the PS3 (relying on the CPU). Maybe the best choice was Microsoft's with the Xbox 360: an equilibrium between CPU and GPU power/budget. I do believe, though, that both the PS4 and the next Xbox will follow this philosophy.
 
RSX is a customized version of Nvidia's high-end GPU from 2005. Just because Xenos outperforms it doesn't mean Sony cut corners the way Nintendo did.
 
Of course Sony didn't cut corners on the PS3's GPU the way Nintendo did with the Wii U's CPU, that's obvious. But I'm talking here about the philosophy of designing the hardware: for Sony it was CPU-centric (we know the first plans were to drop the GPU altogether and replace it with another Cell; the RSX, even though it was powerful, was relatively an afterthought to recover lost time), and for the Nintendo Wii U it was GPU-centric (just look at the silicon budget). I consider both philosophies (CPU-centric vs. GPU-centric) to be extremes.
 
(we know the first plans were to drop the GPU altogether and replace it with another Cell; the RSX, even though it was powerful, was relatively an afterthought to recover lost time), and for the Nintendo Wii U it was GPU-centric (just look at the silicon budget).

Isn't that a myth? I recall it being debunked several times on this forum. There was a "reality synthesizer" GPU that was supposedly unconventional, but I doubt we'll ever get specifics on what the GPU was supposed to be before Nvidia got involved.
 
http://www.bloomberg.com/news/2012-...ext-xbox-console-for-2013-holiday-season.html

The source says the 2013 console will not be the PS4 but the next Xbox.
If true, this probably undermines the speculation about DDR4 and other exotic memory configurations.

I have a spotless record on console release dates (lol), and I'm almost certain you will see the PS4 in 2013 as well.

Also, DDR4 should be doable in late 2013. The other exotic memory technologies are not going to happen in either console anyway, IMO.
 
I totally agree. There are no tangible technological benefits, and only commercial drawbacks, in releasing the PS4 months after the next Xbox's expected release date in fall 2013.

In terms of cost, I don't think it matters much whether Sony and Microsoft use low-speed, inexpensive GDDR5 chips or the newly released, highest-performance DDR4 chips. It will depend on the availability of the fastest DDR4 chips in 2013, but in terms of bandwidth and quantity of RAM it won't make that huge a difference. After all, we are talking here about 4 GB of RAM. (I don't believe the 8 GB scenario either, judging by console hardware throughout history.)
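
For context, peak bandwidth is just effective data rate times bus width, so the two options can be compared directly. A quick sketch (the 256-bit bus and the speed grades are illustrative assumptions, not known specs):

```python
# Peak memory bandwidth = data rate (Gbps per pin) * bus width (bits) / 8.
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

BUS_WIDTH = 256  # bits; assumed identical for both configurations

print(peak_bandwidth_gbs(4.0, BUS_WIDTH))  # low-speed GDDR5 @ 4.0 Gbps -> 128.0 GB/s
print(peak_bandwidth_gbs(3.2, BUS_WIDTH))  # fast DDR4-3200 @ 3.2 Gbps -> 102.4 GB/s
```

On those assumptions the gap between cheap GDDR5 and top-bin DDR4 is real but not generational, which is roughly the point above.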
 
Microsoft jumped from 64 MB to 512 MB in four years, and now we are talking about eight years since the 360's release. And the dev kit has 12 GB of RAM.
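
For what it's worth, naively carrying that growth rate forward is simple arithmetic (a sketch; the model just extends the previous generation's trend, which is an assumption, not a prediction):

```python
# Naive extrapolation of console RAM growth (illustrative only).
# Xbox (2001): 64 MB -> Xbox 360 (2005): 512 MB, i.e. 8x in 4 years.
growth_per_4yr = 512 / 64                  # 8.0x
annual_growth = growth_per_4yr ** (1 / 4)  # ~1.68x per year
projected_mb = 512 * annual_growth ** 8    # 8 years after the 360
print(projected_mb / 1024)                 # ~32 GB if the old trend held
```

32 GB is surely an overshoot, but it does suggest that 8 GB would not be out of line historically.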
 
GDDR5 will always be a lot more expensive than DDR4 over the lifetime of the console. One is a commodity product that may be a little expensive to start with but will drop like a stone once volume production starts. The other is a total niche product, which has been and always will be expensive.
 
I see 8 GB as totally sane if it's just DDR4, but with anything more exotic it looks less likely. With HMC it's maybe doable, and even necessary, given the minimum number of dies needed to build the required stack.
 
It doesn't have to be an HMC, though; I'm quite sure that stacking through TSVs will certainly not be ready by the time the next-gen consoles ship. It'll be 2.5D with an interposer.
 
Microsoft jumped from 64 MB to 512 MB in four years, and now we are talking about eight years since the 360's release. And the dev kit has 12 GB of RAM.

It jumped from 64 MB to 512 MB when PCs were using 2 GB as standard; nowadays the standard for PCs is 8 GB, so I doubt consoles would use (or need) the same amount. Add to this the fact that, generally speaking, consoles use high-bandwidth RAM to compensate for the lack of quantity. But I want to be positively surprised ;)
 
But we are also moving towards longer life cycles, so chucking in a lot of dirt-cheap commodity RAM is an easy and relatively cheap way of adding some longevity to your platform. That's the way I see it, anyway.
 
It doesn't have to be an HMC, though; I'm quite sure that stacking through TSVs will certainly not be ready by the time the next-gen consoles ship. It'll be 2.5D with an interposer.
If you put one or two stacks of DRAM dies on an interposer (like the AMD/Amkor prototype we have seen), how do you think the dies are connected and the interface to the PCB is made? For sure they will use TSVs in that case too (the mentioned prototypes did). Wire bonding does not appear viable given the relatively large dies involved and the fast communication one wants between them.
 