Next Generation Hardware Speculation with a Technical Spin [2018]

If you're going to take apart the PS4 Slim, you're probably much better off taking the PSU and the AC converter out and using DC converters instead.
Otherwise you'll be wasting some 15% of your battery pack converting from DC to AC, and then another 15% at the PSU converting from AC back to DC.
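As a rough sketch of that math (the 15% figures are the ones above; the battery capacity and the ~90% DC-DC efficiency are just assumptions for illustration):

```python
# Rough sketch of the round-trip loss described above. The ~15% per-stage losses
# come from the post; the pack size and DC-DC efficiency are assumed for illustration.
battery_wh = 200          # hypothetical battery pack capacity in Wh
dc_to_ac_eff = 0.85       # portable inverter, ~15% loss
ac_to_dc_eff = 0.85       # PS4 PSU, ~15% loss

usable_via_inverter = battery_wh * dc_to_ac_eff * ac_to_dc_eff
usable_via_dc_dc = battery_wh * 0.90   # assumed ~90% for a decent DC-DC converter

print(f"Through inverter + PSU: {usable_via_inverter:.0f} Wh (~{dc_to_ac_eff*ac_to_dc_eff:.0%} of the pack)")
print(f"Through DC-DC only:     {usable_via_dc_dc:.0f} Wh")
```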

According to iFixit's teardown, the PSU only has two DC voltage outputs: 4.8V and 12V. There's a chance your power pack has one of these voltages as a configurable output (or even both, which would save you a ton of hassle).
Beware that 200W is quite a bit below the Pro PSU's capability, though.

Best of luck with your project!

So it's already well suited to semi-portable use, i.e. cars, caravans, boats?
 
So it's already well suited to semi-portable use, i.e. cars, caravans, boats?

I guess so.
12V is standard for car batteries (and I'd guess other vehicles too), and assuming the PS4 can simply take 5V on the 4.8V rail, all you need is a car USB charger adapter capable of 1.5A.
 
Drivers should only really affect the CPU.
The APIs on console are pretty low-level as it is, so it's up to the developers to push performance.

As for Nvidia vs AMD: at best, without offending anyone, we can say that the bottlenecks are not in the same areas.
But console developers can code around bottlenecks (really designing the whole game and engine around a specific profile), maximizing the GPU and therefore getting close to its theoretical maximum.

The idea of NV FLOPS vs AMD FLOPS isn't that their FLOPS are any different; a FLOP is a unit of measurement. But people tend to describe a whole card's performance by its FLOPS number, with the expectation that the card's entire pipeline is built around that compute figure, and that isn't really reflective of reality. If an AMD card is bottlenecking elsewhere due to the way the game is designed, it can never leverage its full compute resources.
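To illustrate the gap between a headline FLOPS number and what a bottlenecked pipeline actually delivers, here's a toy calculation; the core count and clock are roughly Pro-class figures used only for illustration, and the utilization values are invented:

```python
# Illustrative only: how a headline FLOPS number is derived (cores x clock x 2 ops
# per cycle from FMA, the usual marketing calculation) and why utilization matters
# more than the number itself. Figures are approximate/invented, not measurements.
def theoretical_gflops(shader_cores: int, clock_ghz: float) -> float:
    return shader_cores * clock_ghz * 2   # 2 ops/cycle from fused multiply-add

peak = theoretical_gflops(shader_cores=2304, clock_ghz=0.911)  # ~4.2 TFLOPS class
for utilization in (0.9, 0.6):   # well-tuned console path vs bottlenecked path (assumed)
    print(f"{utilization:.0%} utilization -> {peak * utilization / 1000:.2f} effective TFLOPS")
```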

But that shouldn't be as big a factor in the argument on console.
Good read, I think I understand the difference much better now.
 
and assuming the PS4 can simply take 5V on the 4.8V rail
One has to wonder, why 4.8V, though. What possible use would that voltage have? USB power is rated at 5V (without getting into any of the exotic high-wattage power delivery modes), and no modern electronics use TTL-like signalling levels.

If I were to speculate, the 4.8V is the standby power rail for the ARM processor, and USB 5V is tapped from the 12V rail via DC-DC converters. But in that case, why not save some money and make USB and standby power use the same voltage...? Seems like a weird choice. *shrug* Engineers... Who can understand them, eh? :p
 
One has to wonder, why 4.8V, though. What possible use would that voltage have? USB power is rated at 5V (without getting into any of the exotic high-wattage power delivery modes), and no modern electronics use TTL-like signalling levels.

If I were to speculate, the 4.8V is the standby power rail for the ARM processor, and USB 5V is tapped from the 12V rail via DC-DC converters. But in that case, why not save some money and make USB and standby power use the same voltage...? Seems like a weird choice. *shrug* Engineers... Who can understand them, eh? :p
It might be a power delivery issue if receiving power over USB or another connector. A 200mV drop wouldn't be unreasonable for high current over a 5V USB cable and/or protection circuits. They just left themselves some breathing room. That wouldn't account for the weird high-wattage delivery methods either.

I'd agree it seems strange, but they could be looking ahead towards alternative power delivery modes that might be applicable to VR or camera systems. Processor voltage will be a bit irrelevant, as there will always be additional circuitry stepping down to ~1V. It's just a matter of sufficient power delivery on whichever rail they decide to use.

I could also see more DC-DC or even rough rectified AC delivery over the cables. Most of the PD regulators I've seen lately were spec'd for 30V, so dumping rectified AC through them may not be unreasonable to reduce component cost. The 100W mode was also 20V@5A.
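As a back-of-envelope check on that 200mV point (the cable resistance here is an assumed typical value, not a measured one):

```python
# Ohm's law check on the "200 mV drop" point above. Cable resistance is an
# assumption; thin USB cables are often a few hundred milliohms round trip.
cable_resistance_ohm = 0.20   # assumed round-trip (VBUS + GND) resistance
for current_a in (0.5, 1.0, 1.5):
    drop_v = current_a * cable_resistance_ohm
    print(f"{current_a:.1f} A -> {drop_v*1000:.0f} mV drop, {5.0 - drop_v:.2f} V left at the device")
```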
 
One has to wonder, why 4.8V, though. What possible use would that voltage have? USB power is rated at 5V (without getting into any of the exotic high-wattage power delivery modes), and no modern electronics use TTL-like signalling levels.

Has anyone ever measured the actual voltage output from the USB ports?
Perhaps it really is 4.8V, and most battery-powered devices will still accept it since it's just a 4% drop and most batteries are 3.7V anyway.

Though since the 4.8V rail is only rated at 1.5A (7.2W), it isn't even enough to power two USB devices, let alone the three ports present on the Pro.

And the 12V rail is rated at a whopping 23.5A, meaning it can deliver 282W total, which is almost 40x more power.


A little digging tells me the Pro consumes around 10W at the wall when suspended. That would mean about 70% efficiency for the 4.8V rail if it's the only one in use at that time.
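Quick sanity check of those rail numbers and the suspend figure, using the values quoted above:

```python
# Quick check of the rail figures discussed in this thread (teardown numbers quoted above).
rail_48v = 4.8 * 1.5     # 7.2 W available on the 4.8 V rail
rail_12v = 12.0 * 23.5   # 282 W available on the 12 V rail
print(f"4.8 V rail: {rail_48v:.1f} W, 12 V rail: {rail_12v:.0f} W "
      f"({rail_12v / rail_48v:.0f}x more)")

wall_suspend_w = 10.0    # rough wall-power figure for the suspended Pro, as noted above
print(f"Suspend efficiency if only the 4.8 V rail is loaded: {rail_48v / wall_suspend_w:.0%}")
```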

I'm inclined to guess the 4.8V output is used exclusively for suspend mode, just to keep the ARM + southbridge + DDR3 + WiFi/Ethernet + Bluetooth (+ eventual USB DualShock 4 charging?) alive.
That way they don't have to power up the 12V rectifier during suspend mode, which would probably be even less efficient at those power levels.

Once the console is powered up, the 4.8V output is turned off and instead they use only the 12V rail, with DC-DC converters on the motherboard for the 4.8-5V devices. That way they can send a lot more current to the USB ports, to power e.g. external hard drives and PSVR.
 
Yeah, it makes sense to have a separate standby circuit for efficiency. It's also necessary to have a separate always-on power source to turn the main switcher on/off electronically. None of this matters when battery-powered.
 
Interesting read about ARM CPUs entering the server space. The part that really stands out is the power consumption. On desktop PCs you don't really care, but on a console? I really feel like ARM might end up being the smart way to go, if not for backwards compatibility.

Here, this picture tweeted by Cloudflare's CEO really drives the point home (watts, Intel vs Qualcomm server, same workload/same performance)--


Here is another major ARM server play: ARM-SoftBank/Fujitsu (Fujitsu is building the world's most powerful exascale supercomputer, ARM-based, with a custom ARM super core with massively expanded vector performance)--


I have made the suggestion that Sony could switch to ARM-SoftBank because of the Japanese national tie. Well, here you go: Fujitsu made exactly that move, abandoning SPARC and x86 for ARM.
 
I have made the suggestion that Sony could switch to ARM-SoftBank because of the Japanese national tie. Well, here you go: Fujitsu made exactly that move, abandoning SPARC and x86 for ARM.
CPU is only one half of a console. What GPU would you match up with that ARM chip that is at least equal performance- and capability-wise to AMD's offerings?
 
CPU is only one half of a console. What GPU would you match up with that ARM chip that is at least equal performance- and capability-wise to AMD's offerings?

AMD/NVIDIA. In the case of AMD, the big core would be specifically K12 and the other cores would be customized, stock ARM optimized for gaming. I'm describing an ARM-based AMD APU (w/ Navi, or next-gen AMD graphics). NVIDIA would just be a Tegra-based, console-worthy SoC. The foundation for both options is simply already there.


One more from the Linaro conference two days ago... Guess who else has partnered with ARM and Fujitsu for exascale ARM?--


Microsoft, and this is a major commitment, not an experiment (watch the YouTube video and learn why the datacenter is going to necessitate an ARM supplier ecosystem to avoid single-supplier (x86/Intel) dependency). If Microsoft is spending billions to create an ARM ecosystem in servers, where Intel has a total monopoly and decades of software advantage, then it is absolutely nothing to change a console CPU from x86 to ARM. Microsoft in its brilliance has already laid the foundation with UWP, having no necessity for hardware-based backwards compatibility. Isn't Windows 10 on ARM happening right now? Hello?

Recent hints: Nadella reminding analysts that Microsoft has the capability to lead through silicon innovation, AND that its server and Xbox silicon capabilities are one--https://www.geekwire.com/2018/micro...nds-everyone-company-house-silicon-expertise/ (ARM is specifically mentioned in the article).


An ARM console is certainly a realistic option. The formula for an ARM console CPU is simply: ARM custom big core (for single-threaded performance) + ARM customized medium cores (A75-class, for multithreading) + ARM little cores (A55-class, for OS/standby).

Bonus: ARM DynamIQ allows a true 8-core cluster, avoiding the CCX (4+4) latency of Ryzen, and the core count per cluster will likely be expanded again for 7nm. Secondly, DynamIQ allows independently variable clock rates per core. Imagine a console CPU that allows the heavy-lifting big cores to boost to a 3.5GHz peak when necessary while throttling back the multithreading and OS cores momentarily to maintain TDP.
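As a toy illustration of that budget-juggling idea (all wattages, cluster sizes, and the power budget below are invented, not real DynamIQ data):

```python
# Toy illustration of per-core DVFS under a fixed power budget: boost the big
# cores and pull the mid cores back to stay under TDP. Numbers are invented;
# real DynamIQ/DSU power management is far more involved than this.
BUDGET_W = 15.0                              # assumed CPU power budget

cores = {"big": 2, "mid": 4, "little": 2}    # illustrative cluster sizes

def total_power(per_core_w):
    """Sum cluster power given assumed per-core wattage for each cluster."""
    return sum(cores[c] * per_core_w[c] for c in cores)

base  = {"big": 2.5, "mid": 1.2, "little": 0.3}   # invented W/core at base clocks
boost = {"big": 4.0, "mid": 0.8, "little": 0.3}   # big cores boosted, mid throttled

for name, point in (("base", base), ("boost", boost)):
    p = total_power(point)
    print(f"{name}: {p:.1f} W ({'within' if p <= BUDGET_W else 'over'} budget)")
```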

[Image: ARM DynamIQ cluster diagram (arm-dynamiq-2.jpg)]


Truly a state-of-the-art console CPU solution. ARM is simply the best next-gen console CPU option, one that ticks all the boxes.
 
I always saw ARM as an interesting possibility for a home console, but the question is, what do developers want? I'm going to postulate they'd want to stick with x86 but get the big cores (Zen *cough*cough*) that the current consoles lack. I'm not that up to date on the latest and greatest in ARM, but can it scale, or has it scaled, to x86-class size and frequency while maintaining the kind of hallmark efficiency it's historically known for? The only ARM processors I can think of coming close to that are Apple's own custom SoCs.

Here, this picture tweeted by Cloudflare's CEO really drives the point home (watts, Intel vs Qualcomm server, same workload/same performance)--



The tweet posted earlier doesn't mean much to me without the power draw being plotted on a graph over a decent period of time. It doesn't scream objective, and we don't know the exact workload either, so it's hard for me to take it seriously.
 
Here, this picture tweeted by Cloudflare's CEO really drives the point home (watts, Intel vs Qualcomm server, same workload/same performance)--
Which Xeon from which architecture from which tier?

Within the same product family and same socket, there are Xeon CPUs with very distinct power efficiencies.
Moreover, Xeon offerings are usually 1 or 2 years late in the architecture. We're just now getting Skylake Xeons in some tiers.

That one picture says nothing.

AMD/NVIDIA. In the case of AMD, the big core would be specifically K12 and the other cores would be customized, stock ARM optimized for gaming. I'm describing an ARM-based AMD APU (w/ Navi, or next-gen AMD graphics). NVIDIA would just be a Tegra-based, console-worthy SoC. The foundation for both options is simply already there.

There's no Infinity Fabric (IF) work being done on ARM CPU cores at AMD, AFAIK.
 
I think if anyone would jump to ARM, Microsoft would be first due to their existing investments on the PC side already with W10. I think they have the low level software expertise to pull this off as well.

I could see MS going with an NVIDIA+ARM solution.
 
Which Xeon from which architecture from which tier?

Within the same product family and same socket, there are Xeon CPUs with very distinct power efficiencies.
Moreover, Xeon offerings are usually 1 or 2 years late in the architecture. We're just now getting Skylake Xeons in some tiers.

That one picture says nothing.



There's no Infinity Fabric (IF) work being done on ARM CPU cores at AMD, AFAIK.


This is the comparison of the ARM and Xeon CPUs Cloudflare is looking at: https://blog.cloudflare.com/arm-takes-wing/
 
This is the comparison of the ARM and Xeon CPUs Cloudflare is looking at: https://blog.cloudflare.com/arm-takes-wing/
This really shows that simpler CPUs with less single-threaded performance are more efficient for extremely parallel work such as web serving. I doubt it would run a game nearly as well, as each of the ARM CPU cores has only about 60-80% of the IPC of Skylake. I doubt game developers would want 20 ARM64 cores over 8 high-performance x86 cores, even if the ARM cores are almost 2x as energy-efficient as the x86 cores when doing purely parallel work.
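A toy comparison using those same figures (ignoring clocks, SMT, memory and everything else, so treat it purely as an illustration of the trade-off):

```python
# Toy comparison using the figures in the post above: aggregate throughput on
# embarrassingly parallel work vs per-core (single-thread) speed. Illustrative only.
x86_cores, x86_perf = 8, 1.00          # normalized per-core performance
arm_cores = 20
for arm_perf in (0.6, 0.8):            # "60-80% of Skylake IPC" from the post
    aggregate = arm_cores * arm_perf
    print(f"ARM @ {arm_perf:.0%} per core: aggregate {aggregate:.1f} vs x86 {x86_cores * x86_perf:.1f}, "
          f"but single-thread {arm_perf:.0%} vs 100%")
```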

It does make me wonder what the performance of a compute GPU would be compared to a CPU at doing web server work. I imagine there would be bottlenecks somewhere, because GPUs are amazing at cryptography, as demonstrated by cryptocurrency mining, and that's a lot of what Cloudflare tested.
 
Twice the cores at half the per-core performance will mean higher efficiency. And it's an engineering sample using a newer process node. And they compiled without Intel-specific optimizations. And there are no big compute tasks using AVX in the benchmarks, which would paint a very different picture.
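For that first point, the usual dynamic-power argument looks roughly like this (normalized, invented numbers; assumes voltage can be lowered roughly along with frequency):

```python
# Rough sketch of the standard dynamic-power argument: P_dyn ~ C * V^2 * f, and
# voltage tends to drop as frequency drops. All values normalized/illustrative.
def dynamic_power(freq, volt, cap=1.0):
    return cap * volt**2 * freq

baseline = 8 * dynamic_power(freq=1.0, volt=1.0)      # 8 fast cores
wide     = 16 * dynamic_power(freq=0.5, volt=0.75)    # 16 cores at half clock, lower V (assumed)
# Aggregate throughput is the same (16 * 0.5 == 8 * 1.0) on perfectly parallel work.
print(f"Same aggregate throughput, relative power: {wide / baseline:.0%} of baseline")
```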

In a console workload, many small ARM cores would be a massive inconvenience for questionable gains.

How about many low-clocked x86 cores... Would that be a similar inconvenience with questionable gains?
 
Here, this picture tweeted by Cloudflare's CEO really drives the point home (watts, Intel vs Qualcomm server, same workload/same performance)--
The telling thing is that there are no Epycs in the comparison, which have been beating Intel significantly on perf/watt with all those cores. That's ignoring the 14nm vs 10nm process difference Mize alluded to. These numbers don't seem a good source for comparing x86 to ARM for any real conclusion; it's more a comparison of an upcoming product with a competitor's existing one. Interesting nonetheless.
 
I think if anyone would jump to ARM, Microsoft would be first due to their existing investments on the PC side already with W10. I think they have the low level software expertise to pull this off as well.

I could see MS going with an NVIDIA+ARM solution.
Releasing a mainly multiplat console without a (PC-dominant) x86 CPU? Unlikely.
 