Vita 2 / PS4 Go?

The current slim consumes 63 watts under load (no disc) according to DF's testing.

I'm assuming 7nm won't quite halve power consumption, but you'll get possible gains from a different memory architecture and solid-state storage. So even optimistically, power consumption for the system will be greater than 20 watts.

More than happy to hear a better/different argument. A portable PS4 would be great.
 
Why would a portable PS4 be the best option? Surely an almost-PS4 whose games could all also be played on the PS4 would be the better option? It could be designed specifically around the thermal and battery constraints that befit a portable, but developers/publishers would have an install base in the tens of millions at its launch.
 
In addition to the above, Sony would have to keep retailers happy and would therefore need to facilitate a means of physical distribution that would be compatible with the PS4.

The best ways that I can think of would be to either release a card reader or a new DualShock with an integrated card reader. Personally, I prefer the latter, but would that be cost prohibitive?

A DualShock 5 with an integrated card reader and a simple screen (black-and-white OLED) would be quite an elegant solution IMO, and would be suited to launch alongside the PSP2/PSP3/PSVita2.
 
The current slim consumes 63 watts under load (no disc) according to DF's testing. [...]

CH1200's purpose was to design something cheaper in the long run, not more power efficient. The power savings relative to the original CH1000 come from the SoC process shrink alone (which, by the way, was the lowest-effort console shrink I've ever seen).


For starters, those 63W are probably measured at the wall, so count on ~80% efficiency from the PSU and you get ~50W.

Then it's using 8x 1.5V GDDR5 memory chips (the cheapest possible, I think, since 5500MT/s chips could be found in 1.3V variants).

The R9 290's GDDR5 chips had similar electrical specs and that's what they used for the HBM promo materials:

[Image: AMD slide comparing GDDR5 and HBM power efficiency]


10.66 GB/s per Watt on 1.5V GDDR5. That means we're looking at 176/10.66 = 16.5W from the memory alone.

According to the slide above, 2015's HBM could already achieve this bandwidth using only 5W (probably what a single HBM stack consumes, really).

HBM2 was focused on performance and density, but the data rate per pin was doubled while maintaining voltage, so I'd guess it's substantially more power efficient, and HBM3 (and low-cost HBM too) will probably improve on this.
In the end, 8GB at 176GB/s bandwidth will be easily attainable in a single stack at 3W or less (it probably already is with HBM2).
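
As a back-of-envelope sketch of that memory math (the efficiency figures are my read of the slide above, not datasheet numbers):

```python
# Back-of-envelope memory power, using the slide's efficiency figures.
bandwidth = 176.0          # GB/s, PS4 memory bandwidth

gddr5_gbps_per_w = 10.66   # 1.5V GDDR5, from the slide
hbm_gbps_per_w = 35.0      # assumed: ~176 GB/s at ~5 W per the slide

print(f"GDDR5: {bandwidth / gddr5_gbps_per_w:.1f} W")  # ~16.5 W
print(f"HBM:   {bandwidth / hbm_gbps_per_w:.1f} W")    # ~5.0 W
```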

So with ~13W less spent on memory, we're down to ~37W for the rest.

This is assuming Wide IO2 wouldn't be a lot better for this, of course. Memory density would be a problem nowadays, but in 2018, using 7nm cells, it might not be.

The hard drive has a continuous power consumption of 3.5W (5V, 700mA rating). Changing it to eMMC 5.1 or UFS 2.0 would bring that down to 500mW average or lower. So that's another 3W less, and we're down to 34W.


Then one would assume this portable console would get a power distribution system optimized for low power consumption instead of low cost, but we'll leave it at that for now.

From those 34W and assuming 0.5W for storage and 3W for memory, we're down to 30.5W for the SoC itself.
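
Putting the running budget in one place (a sketch; every figure is an estimate from this post, not a measurement):

```python
# Running power budget for the hypothetical portable, per the estimates above.
wall = 63.0                  # W at the wall (DF's PS4 Slim measurement)
rails = wall * 0.80          # assumed 80% PSU efficiency -> ~50 W

rails -= 16.5 - 3.0          # GDDR5 -> single HBM stack: ~13.5 W saved
rails -= 3.5 - 0.5           # HDD -> eMMC/UFS: ~3 W saved

soc = rails - 3.0 - 0.5      # subtract memory and storage themselves
print(f"SoC budget: ~{soc:.1f} W")   # ~30.4 W, close to the ~30.5 W above
```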

Two nodes below 16FF (16/14nm -> 10nm -> 7nm), plus some effort in the architecture to bring power down (e.g. modifying the iGPU to use tile-based rendering like Vega and color compression like Polaris, and swapping the CPU cores for a single Ryzen 2 CCX at 1.6GHz), and it's not hard to imagine turning this ~30W SoC into one that consumes 5-6W (compounded in the sketch after the list):
- 35% power reduction per node transition (~58% total reduction across two nodes, bringing it to <13W)
- 5 years' worth of GPU architectural optimizations and new CPU cores bringing another 50% reduction in power consumption (13W -> 6.5W)
- More efficient power distribution ICs for much lower currents bringing at least another 10% decrease (6.5W -> ~6W)
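
Spelling out the compounding behind those three bullets (just the arithmetic, using the assumed percentages):

```python
# Compound the three assumed reductions from the list above.
soc = 30.5              # W, the SoC budget estimated earlier

soc *= (1 - 0.35) ** 2  # two node shrinks at ~35% each -> ~12.9 W
soc *= (1 - 0.50)       # architecture + new CPU cores  -> ~6.4 W
soc *= (1 - 0.10)       # leaner power delivery         -> ~5.8 W

print(f"Projected SoC: ~{soc:.1f} W")
```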


Max 6W SoC, max 3W system RAM, max 500mW storage, max 2W 1080p display. Total 11.5W power consumption (plus WiFi and Bluetooth if/when needed) -> perfectly doable with a small heatsink and a blower fan, just like the Switch in docked mode.
Make it a larger 8/9" tablet instead of a 6" one like the Switch and a much larger 7000mAh / 26Wh battery can be put inside, with no need for detachable-controller gimmicks since it can just connect regular DS4s through Bluetooth. A 26Wh battery is enough for ~2.3 hours of gaming at full load, and more at lighter loads.
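
The battery-life figure under those same assumptions, for the record:

```python
# Battery life at the full (assumed) system load.
battery_wh = 26.0               # 7000 mAh at ~3.7 V nominal
load_w = 6.0 + 3.0 + 0.5 + 2.0  # SoC + RAM + storage + display

print(f"~{battery_wh / load_w:.1f} h at full load")  # ~2.3 hours
```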




Make it run standard PS4 library games and it's a huge win.
 
Those are clearly patent diagrams, so the source is no doubt the Japanese patent.
Yes, but I was trying to see what those 281 and 282 squares were:

[Patent diagram: main board 28, with blocks 281 and 282 highlighted]


My shameless Google translation refers to 281 as a "control circuit such as a CPU" and 282 as a "drawing circuit such as a GPU."

Is it usual nowadays to make a distinction between GPU and CPU in a device that will most likely use an SoC?

Furthermore:

The right operation section 3 and the left operation section 4 have signal output boards 33, 43 which are output boards for outputting operation signals. Further, the main board 28 has a connector 285 to which the flexible printed board FPC of the signal output board 43 is connected, and the sub board 29 has a connector 291 to which the flexible printed board FPC of the signal output board 33 is connected. According to this, it is possible to separate each of the operation portions 3 and 4 from the main substrate 28 and the sub-substrate 29. Accordingly, it is possible to attach the operation section having a shape which the user can easily grasp, and the operation section having at least one of the configuration and arrangement of the input section to the main body section 2. Therefore, versatility of the information processing apparatus 1 can be further improved.

Sure looks like detachable controllers.
 
Max 6W SoC, max 3W system RAM, max 500mW storage, max 2W 1080p display. Total 11.5W power consumption [...] Make it run standard PS4 library games and it's a huge win.

Thanks for breaking it down.

Optimisations aside, isn't a die shrink a 50% power saving though? (If you're not too leaky)
 
Optimisations aside, isn't a die shrink a 50% power saving though? (If you're not too leaky)

It depends on the type of process. As we saw yesterday in Anandtech's article on the Kirin 960, just going from performance-oriented 16FF+ to density-oriented 16FFC can cause a very large difference in power efficiency, and those are both 16nm FinFET processes at TSMC.

On a general scale, TSMC is claiming a 40% power reduction at iso-performance between 16FF+ and 10FF, and then <40% between 10FF and 7FF. Samsung is claiming a 30% power reduction between 14LPP and 10LPE.
GlobalFoundries will skip 10nm but is claiming a >60% power reduction between 14LPP and 7DUV. That's the rough equivalent of two successive 37% reductions (0.63 * 0.63 ≈ 0.40 = 1 - 0.60).

In the end, the 35% I used is just a realistic/cautious number given the claims from the major manufacturers.
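
A quick sanity check on that compounding (just arithmetic, not a process claim):

```python
# Fraction of power removed after `nodes` shrinks of `per_node` each.
def total_reduction(per_node: float, nodes: int = 2) -> float:
    return 1 - (1 - per_node) ** nodes

print(f"{total_reduction(0.37):.0%}")  # ~60%, matching GF's 14LPP -> 7nm claim
print(f"{total_reduction(0.35):.0%}")  # ~58%, the cautious figure used earlier
```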
 
Is it usual nowadays to make a distinction between GPU and CPU in a device that will most likely use a SoC?
Makes no difference; the diagram is only one embodiment.

Sure looks like detachable controllers.
Definitely, from the diagram. You can see the connecting ports - it's a patent for a console-type device with detachable controllers.
 
Yesterday SIE issued a press release about the people assigned to their newly established "Location-based Entertainment Business pre-opening office" in their global R&D division. I saw it on some VR news outlet, but I doubt they'd release location-based AR applications on PS4.

https://www.sie.com/corporate/release/2017/170316.html

With the success of the Nintendo Switch, AMD may be offering a lucrative deal on their embedded G-series SoCs with Jaguar cores.
 
With the success of the Nintendo Switch, AMD may be offering a lucrative deal on their embedded G-series SoCs with Jaguar cores.

There really isn't anything remotely competitive with Jaguar/Puma cores anymore, at least not off the shelf.

Last year they launched the GX-210JL, which is a 6W TDP SoC built on 28nm with 2 Puma cores at 1GHz and 1 CU (64 SPs, 4 TMUs, 1 ROP) at 267MHz.
We're looking at Snapdragon 410 performance here, or less.

Their best bet for an off-the-shelf SoC would be Merlin Falcon (Carrizo, I think), like the Smach-Z uses: 2 Excavator modules + 8 GCN3 CUs, though at the lowest 12W target the clocks are a big question mark. I'd say maybe 1.5GHz CPU and 600MHz GPU.
But those are 2-year-old 28nm chips.
I'm positive Sony would have the wits, will and budget to order a semi-custom 7/10nm FinFET SoC with Zen cores if they wanted.
 
If Sony did release a PS4 capable gaming tablet, what would they choose to do with docked mode?

Is it possible to totally shut down power to CUs?

If it is, then you could put 36 CUs on the SoC and go PS4 Pro in docked mode. You'd only be looking at 18W-ish docked with the numbers we're using (see the sketch below).
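
A sketch of that guess, stacked on the earlier (already speculative) figures:

```python
# Docked-mode guess: 36 CUs on die, half of them power-gated when portable.
soc_portable = 6.0             # W, 18 CUs at 7nm (earlier estimate)
soc_docked = 2 * soc_portable  # all 36 CUs lit -> ~12 W

system_docked = soc_docked + 3.0 + 0.5 + 2.0  # + RAM, storage, display
print(f"~{system_docked:.0f} W docked")       # ~18 W (less with display off)
```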

Cost aside, which is a large factor, is that plausible? It really does seem like crazy talk for a 2019/20 product.
 
You don't want PS4 power in the handheld even if it's disabled, because the handheld would need to carry all the cooling and junk for when it's running from the wall. IMO Sony should do what I said years ago (unsurprisingly!): a handheld tablet which can be propped up and connected to a TV to game (a camera in the tablet handles camera input when propped up), plus a dock that increases power. I think originally I described something more Switch-like, with little emphasis on the dock.

I suppose the question then is what are the options for docking into an expansion system to get more power?
 
It may draw only 18 watts in mobile mode, but a PS4 Pro can draw 150+ W in game. You need cooling capable of handling that when it's docked and running full pelt.
 
Based on Tottentranz's numbers, which squeeze a vanilla PS4 SoC into 6W at 7nm, I've just scaled that for a Pro at 7nm.

I don't really believe 7nm would get the Pro's SoC to a tenth of what it's drawing at 16nm, though; I was just playing with the numbers.
 
I'm a little lost. If you want to discuss a 'PS4-capable' gaming tablet, what wattage are you talking about in 1) mobile mode and 2) docked mode? Such a discussion needs to be based on realistic figures for those.
 