Design a console with better specs than PS4 and Xbone, constraints apply

I would have gone with the Xbone design, except with 16GB of DDR3 (via density doubling) and an upclock to a 1040MHz GPU, or, if clocking higher wasn't possible, 2-4 added CUs.

Pretty simple. Fits with what's already done. I like the Xbone design.

Maybe I'd have switched the 8-core Jaguar out for a 4-core Piledriver at 3.5GHz or so. But I expect this would have been too expensive.

Maybe the RAM would be overkill, but I like it. Can't have too much RAM IMO, especially as the years drag on. It can reduce loading times too.
 
I would have gone with the Xbone design, except with 16GB of DDR3 (via density doubling) and an upclock to a 1040MHz GPU, or, if clocking higher wasn't possible, 2-4 added CUs.
I don't understand these designs that take the existing designs and +1 them. If the target is a BOM and wattage, are we saying the current designs low-balled and Sony/MS could have pushed a little harder?

In fact that's the premise of the whole thread. MS and Sony have put together boxes with such-and-such specs at their price and power draw. Now we put in even more hardware for the same price. I'm not seeing the logic, other than to change business strategy and sell at a loss, unless we consider MS and Sony pretty incompetent and assume they could have had better specs for the same money (or the same specs for less money) had they picked a different architecture.
 
nice contest, I like it:

Process/technology:
1.gen: GPU: 28nm TSMC / CPU: 32nm SOI GF
2.gen: 28nm FD-SOI [introduction: end of 2014 or mid of 2015]
3.gen: 16nm or 14XM + FD-SOI [introduction: open]
__
SoC:
1.gen: no SoC, but CPU and GPU on one interposer (CPU ~90mm² + GPU/northbridge: 300mm²)
2.gen and 3.gen: SoC / APU (2.gen: APU with ~350mm²)

CPU:
8 x Jaguar @ 3.2GHz with 4MB full-speed L2 cache + 2 x 40GB/sec bidirectional connections to the GPU/northbridge (like HT3.0 but 64-bit wide)

GPU:
24 CUs (GCN1.1 or GCN2.0) @ 1200MHz
32 ROPs

Northbridge (within the GPU during 1.gen):
Unified GPU L2/scratchpad of 8MB (not sure if this is possible with GCN)
384-bit memory interface (actually 3 x 128-bit)

Memory:
Unified 6GB of 1200MHz GDDR5 (restricting it to 1200MHz saves die space; see Pitcairn vs. Tahiti)
230GB/sec
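
For reference, the 230GB/sec figure falls out of the usual GDDR5 bandwidth formula (4 bits per pin per command clock); a quick sketch:

```python
def gddr5_bandwidth_gbs(bus_bits, clock_mhz):
    # GDDR5 is quad-pumped: 4 bits per pin per command clock
    data_rate_gbps = clock_mhz * 4 / 1000  # Gbit/s per pin
    return bus_bits * data_rate_gbps / 8   # GB/s across the whole bus

print(gddr5_bandwidth_gbs(384, 1200))  # 230.4 GB/s
```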

Blu-ray:
4x Blu-ray drive with 3 pickups and one TrueX-25 laser per pickup
3 pickups to reduce latency
TrueX-25 for very high bandwidth (4x base speed = 18MB/sec, x 25 beams x 3 pickups ≈ 1.35GB/sec :) )
=> games load only from Blu-ray (within 4-5 sec)
(TrueX was developed by Zen Research; only TrueX-5 and TrueX-7 were marketed, but according to an old article TrueX-25 was in development.)
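
To make that drive arithmetic explicit, here's a rough sketch (1x Blu-ray ≈ 4.5MB/sec is the standard base rate; the 6GB level size is just an assumption to illustrate the 4-5 sec load claim):

```python
BD_1X_MBS = 4.5  # standard 1x Blu-ray transfer rate, ~4.5 MB/sec

def truex_drive_mbs(base_speed, beams_per_pickup, pickups):
    # hypothetical TrueX-style drive: each pickup reads several tracks
    # at once with a split beam, and all pickups work in parallel
    return BD_1X_MBS * base_speed * beams_per_pickup * pickups

rate = truex_drive_mbs(4, 25, 3)  # 1350.0 MB/sec, i.e. ~1.35 GB/sec
print(rate)
print(6000 / rate)  # an assumed 6GB level would load in ~4.4 sec
```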

Storage:
32GB Flash for the OS
4 SDXC slots on the console for user-content/storage or 1 SDXC slot on the controller
2x USB3 ( for external HDDs)

Consumption:
1.gen: 250W
2.gen: 200W
3.gen: 150W

Cooling:
1.gen: nearly noiseless water-cooling with a large fan like Xbone
2.gen: open

Performance:
~2 x PS4
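
As a sanity check on the "~2 x PS4" claim, the standard GCN peak-FLOPs formula (64 shaders per CU, 2 flops per shader per clock) gives almost exactly a 2x ratio over the PS4's 18 CUs @ 800MHz:

```python
def gcn_tflops(cus, clock_mhz):
    # peak single precision: CUs x 64 shaders x 2 flops (FMA) per clock
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

print(gcn_tflops(24, 1200))  # ~3.69 TF for the GPU above
print(gcn_tflops(18, 800))   # ~1.84 TF for the PS4 -- almost exactly 2x
```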
 
The only way I see to do something fundamentally different from what Sony and MSFT did would be to revive the Larrabee concept. It would be both risky and insanely expensive, though extremely geek friendly.
 
APU containing:
CPU: 1 Piledriver module (2 cores) @ 3.2GHz + 2 Jaguar modules (8 cores) @ 1.6GHz
GPU: GCN 24 CUs (1536 SPs), 96 TMUs, 32 ROPs, 2 geometry engines @ 800MHz

(MCH responsible for misc. IO would be integrated into APU with the first die shrink.)

Memory:
256-bit interface from APU to 6GB GDDR5 @ 6GHz (192GB/s)


or


APU containing:
CPU: 8-core Intel 'Avoton' Atom @ 2.4GHz
GPU: Intel 'Haswell IGP', 120 EUs @ 1.2GHz

On-package, 2 x 128MB DRAM chips, each connected to the APU by a 512-bit bus.

(MCH responsible for misc. IO would be integrated into APU with the first die shrink.)

Memory:
256-bit interface from APU to 8GB DDR3 @ 2133MHz (68GB/s)

Otherwise same as the PS4.
 
I wonder why SONY and MS would shell out that much money for their respective R&D teams if coming up with better solutions was just as simple and easy as this thread implies.
 
The question is: if these are mostly COTS products built from existing AMD designs, why was there any major R&D at all?

In XB1 the engineering went into the ESRAM and SHAPE block.
 
Scratch my second suggestion of 120 Haswell IGP EUs; with a die of 174mm² on 22nm for just 40 EUs, such a configuration would be unmanufacturable (even 80 EUs + 8 next-gen Atom cores would be pushing the boundary).
 
I wonder why SONY and MS would shell out that much money for their respective R&D teams if coming up with better solutions was just as simple and easy as this thread implies.

Both Microsoft and Sony have been in this game for 12 and 19 years respectively. While you could easily say that a business requires investment before seeing returns, both companies have been in this market long enough that losing tons of money on a console isn't viable anymore.
 
I don't understand these designs that take the existing designs and +1 them. If the target is a BOM and wattage, are we saying the current designs low-balled and Sony/MS could have pushed a little harder?
There is a hindsight multiplier.
Specs can be pared back, certain techs skipped over, and things like redundancy and reserved resources expanded for the sake of risk management and time to market.
In addition, we have no solid numbers for the costs involved in R&D, and it's easy to say "use this tech" without knowing whether it would have led to the development effort's costs bloating or forced a delay waiting for more budget outlays.

This thread has the luxury of not having to make some of these calls two years ago, and it isn't on the hook for millions of dollars if it had predicted wrong.

If, for example, there were a delay in the roll out of acceptable 4Gbit GDDR5 chips, Sony might have not pushed past 4 GB. Eventually, there would have been an announcement of the new chips, and this thread would have a bunch of posts giving the PS4 double the memory.

Easy.
 
Both Microsoft and Sony have been in this game for 12 and 19 years respectively. While you could easily say that a business requires investment before seeing returns, both companies have been in this market long enough that losing tons of money on a console isn't viable anymore.
Yup, Microsoft knew perfectly well that they'd lose a shitload of money on this for the first generation and some of the second.

Microsoft has some "spare" billions a year in gross revenue, so maybe they can afford certain ventures, but a third time after the RRoD and so on? I don't see it.
 
There is a hindsight multiplier.

Definitely. Two years ago, I was expecting PPC based XB1 and PS4.

I think the XB1 and PS4 are fairly optimal solutions. It is clear MS and Sony have different constraints, with Sony allowing higher power consumption.

Cheers
 
Definitely. Two years ago, I was expecting PPC based XB1 and PS4.

I think the XB1 and PS4 are fairly optimal solutions. It is clear MS and Sony have different constraints, with Sony allowing higher power consumption.

Cheers
I can't help thinking that Microsoft's vision seems to require 8GB for its "everything accessible instantly, at the same time", so they probably decided very early on that 4GB wouldn't be enough, and that 8GB GDDR5 was an extremely high risk in terms of availability. (how to split? 2GB OS and 2GB games?)

On the other side, Sony was planning to go "gamers first, anything else is nice to have". So 4GB would have been fine for them, the 8GB was a nice to have which panned out at the last minute.

So now I'm curious what would have happened if 4Gbit GDDR5 availability had been a sure shot, known long in advance, instead of the close call it seems to have been. Maybe Microsoft would have used GDDR5?
 
and that 8GB GDDR5 was an extremely high risk in terms of availability. (how to split? 2GB OS and 2GB games?)

And high cost and high power consumption.

I think MS really wanted DDR4 to be in the XB1, but couldn't count on an early mass-market uptake.

On the other side, Sony was planning to go "gamers first, anything else is nice to have". So 4GB would have been fine for them, the 8GB was a nice to have which panned out at the last minute.

I still see Sony's move as a knee-jerk reaction: they were forced to double down on RAM, and it's going to add cost for the duration of this generation.

What if AMD and Nvidia shift all their GPUs to 256MB of fast DRAM on an interposer (or direct die stacking) plus DDR4-4000 in two years' time? Then Sony will be the sole buyer of GDDR5 for the following 6-8 years; boutique memory at boutique prices.

So now I'm curious what would have happened if 4Gbit GDDR5 availability had been a sure shot, known long in advance, instead of the close call it seems to have been. Maybe Microsoft would have used GDDR5?
I'm pretty sure MS would have gone with a large amount of on-die RAM plus mass-market memory regardless.

Cheers
 
IBM CPU and AMD northbridge/GPU

- 4 POWER7.5 CPUs with SMT and enhanced random features
- a sea of shared eDRAM, because IBM is good at it
- an unholy number of ROPs in the eDRAM
- some video encoding/decoding moved to the CPU, because I can
- one core reserved for the system
- developers can choose between 4 CPU game profiles (all at 2.5GHz; two at 2.8GHz and the other at 2.2GHz; one at 3.2GHz and the others at 2GHz)
- HyperTransport to connect to the GPU
- 4GB GDDR5 @ 7GHz on a 256-bit bus
- 22 GCN2 CUs (with 2 more CUs for redundancy) @ 850MHz, a little under an epic 2.5TF, < 300mm²
- 4 ACEs, 4 primitive engines
- integrated video scaler
- digital DAC; yeah, digital-to-digital that is analog (Nvidia did something like this for radio)
- about 250W total
- during media playback you can rely almost entirely on the CPU and shut down the GPU to reduce noise
- the 3rd version has the CPU and GPU on the same substrate
- the controller is almost the 360's, but with two-stage triggers like the GameCube's and bigger face buttons with a capacitive surface so you can use them as a trackpad; plus gyro, and with Kinect 2 it can be used like a Wiimote
 
I still see Sony's move as a knee-jerk reaction: they were forced to double down on RAM, and it's going to add cost for the duration of this generation.

What if AMD and Nvidia shift all their GPUs to 256MB of fast DRAM on an interposer (or direct die stacking) plus DDR4-4000 in two years' time? Then Sony will be the sole buyer of GDDR5 for the following 6-8 years; boutique memory at boutique prices.
So it's not an optimal solution for the long term, or could they get lucky? Or have they secured a contract for this eventuality?
 
As stated already, it's pretty difficult to improve on what Sony and MS have already put together given the exact same constraints (for obvious reasons). So I guess the game is to change those constraints and see what you can put together within the new limits.

I'm going to assume a 250W TDP, which I think should be doable in an Xbone-sized case reasonably quietly.

CPU: i7 4770T (45W)
GPU: GeForce GTX 670 (<170W using the latest silicon a la GTX 770)
RAM: unified 8GB GDDR5 @ 6GHz

The CPU is a quad Haswell running at 2.5GHz, sporting 320 GFLOPs of AVX2 throughput plus another 307 GFLOPs of very capable compute performance in the on-die HD4600. That's over 600 GFLOPs that could be dedicated to "GPGPU"-type tasks which the current consoles are expected to run on the GPU.
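
For what it's worth, the 320 GFLOPs figure matches Haswell's theoretical AVX2 peak (two 256-bit FMA ports per core, 8 single-precision lanes each):

```python
def avx2_peak_gflops(cores, ghz, fma_units=2, sp_lanes=8):
    # Haswell core: 2 FMA ports x 8 SP lanes x 2 flops per FMA = 32/cycle
    flops_per_cycle = fma_units * sp_lanes * 2
    return cores * ghz * flops_per_cycle

print(avx2_peak_gflops(4, 2.5))  # 320.0 GFLOPs
```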

The GPU, which would then be dedicated purely to graphics work, offers performance very similar to a vanilla 7970, which as we know sports 3.7 TFLOPs of GCN-style shader performance (2x PS4).

If this design is overstretching the power limits a bit, then clocks could be dropped to, say, 2GHz on the CPU, 800MHz on the GPU and 5500MHz on the memory, shaving off an easy 30-50W while still maintaining performance well in excess of the current console designs.
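
As a very rough illustration of that 30-50W estimate, dynamic power scales roughly with frequency times voltage squared; the clocks and voltages below are illustrative assumptions, not published figures:

```python
def scaled_power(p_watts, f_old, f_new, v_old, v_new):
    # first-order dynamic power model: P ~ f * V^2
    return p_watts * (f_new / f_old) * (v_new / v_old) ** 2

# assume the ~170W GPU portion drops from 980MHz @ 1.00V to 800MHz @ 0.95V
print(scaled_power(170, 980, 800, 1.00, 0.95))  # ~125W, roughly 45W saved
```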
 
I still see Sony's move as a knee-jerk reaction: they were forced to double down on RAM, and it's going to add cost for the duration of this generation.

What if AMD and Nvidia shift all their GPUs to 256MB of fast DRAM on an interposer (or direct die stacking) plus DDR4-4000 in two years' time? Then Sony will be the sole buyer of GDDR5 for the following 6-8 years; boutique memory at boutique prices.
Perhaps not the best place to ask, but why is GDDR5 so expensive/hard to produce? I'm under the impression the chips are based on DDR3, so I don't understand why they're such a rare commodity. Binning, perhaps?
 
Unless you go with custom chips it is very hard to do better than the PS4 IMO, and if you do go custom, your R&D is way higher, it may not pay off anymore, and there goes ease of development.


The only thing I think could be improved is dedicated hardware for the PS4 Eye and mic, like MS is doing; I don't know if anything else could benefit from such a solution. Maybe an audio DSP with even better specs than the rumoured ones, as I really like good audio.


Besides that, just more of everything; but I quite honestly think it is already very good, and I'd prefer to keep power, noise and price down!
 