PS4 Pro Speculation (PS4K NEO Kaio-Ken-Kutaragi-Kaz Neo-san)

Adding a second GPU of a completely different architecture? That's bonkers. It'd mean two rendering paths and a whole lot of work for devs to target PS4K, and zero benefit for PS4 games running on the system that'd be limited to the basic Liverpool GPU.


I think this is what will happen: the PS4 games will be the same, but the 2nd GPU will be used for up-rendering & re-projecting, while new games could use it for other compute tasks & so on. But devs really wouldn't have to do much, because it's still the PS4, just with a better offloading processor.

From Sony patent

DYNAMIC CONTEXT SWITCHING BETWEEN ARCHITECTURALLY DISTINCT GRAPHICS PROCESSORS

1. A computer graphics apparatus, comprising: a) a central processing unit (CPU), wherein the CPU is configured to produce graphics input in a format having an architecture-neutral display list for a sequence of frames; b) a memory coupled to the central processing unit; c) first and second graphics processing units (GPU) coupled to the central processing unit, wherein the first GPU is architecturally dissimilar from the second GPU; and d) a just-in-time compiler coupled to the CPU and the first and second GPU configured to translate instructions in the architecture-neutral display list into an architecture-specific format for an active GPU of the first and second GPU, wherein the just-in-time compiler is configured to perform a context switch between the active GPU and the inactive GPU, wherein the active GPU becomes inactive and the inactive GPU becomes active to process a next frame of the sequence of frames, and turn off the one of the first and second GPU that is inactive after the context switch.
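
For what it's worth, claim 1 reads like a per-frame swap between the two GPUs rather than running them in parallel. Here is a minimal sketch of that reading, assuming nothing beyond the claim's own wording; every class and function name below is invented purely for illustration, not how Sony would actually implement it.

```python
# Hypothetical sketch of the patent's claim 1 (all names invented):
# one GPU is active per frame while the other is powered off, and a JIT
# translates an architecture-neutral display list into whatever native
# format the currently active GPU understands.

class Gpu:
    def __init__(self, arch):
        self.arch = arch
        self.powered = True

    def power_on(self):
        self.powered = True

    def power_off(self):
        self.powered = False

    def submit(self, native_cmds):
        print(f"{self.arch}: executing {native_cmds}")

class JitCompiler:
    def translate(self, neutral_cmds, target_arch):
        # Lower architecture-neutral display-list commands into the active
        # GPU's architecture-specific format.
        return [f"{target_arch}:{cmd}" for cmd in neutral_cmds]

def render_sequence(frames, gpu_a, gpu_b, jit):
    active, inactive = gpu_a, gpu_b
    inactive.power_off()
    for neutral_cmds in frames:
        active.power_on()
        active.submit(jit.translate(neutral_cmds, active.arch))
        # Context switch: active and inactive GPUs swap roles, and the GPU
        # that is now inactive is turned off before the next frame.
        active, inactive = inactive, active
        inactive.power_off()

render_sequence([["clear", "draw scene"], ["clear", "draw scene"]],
                Gpu("GCN"), Gpu("PowerVR"), JitCompiler())
```

The toy run at the bottom just alternates two frames across the two Gpu objects, mirroring the active/inactive language in the claim.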
 
Probably the biggest misconception in the history of VR internet topics.
Dual GPU is not better for anything. It's the most inefficient way of computing in recent memory.
Dual GPU is worse for everything, unless you are selling the GPUs.

(...)

But the whole "2 eyes so 2 GPUs is super efficient" needs to have a Terminator sent back in time to kill that stupid argument's parents. And if that fails, kill the people who wrote it down on the internet.

By "stupid" you mean John Carmack, lead programmer of doom, wolfenstein, quake, rage and their idtech engines, pioneer of several widely used rendering techniques and current Chief Technological Officer of none other than Occulus VR?

John Carmack said:
An attractive direction for stereoscopic rendering is to have each GPU on a dual GPU system render one eye, which would deliver maximum performance and minimum latency, at the expense of requiring the application to maintain buffers across two independent rendering contexts.

Okay, got it.
 
Back to my thought of this just being an upgrade to the GNB & not the main GPU. We have STBs & so on that will be using PowerVR GPUs, so they are keeping them up to date for all the 4K stuff, & they will be getting new GPUs, so prices should stay low as Sony keeps upgrading to the newer models. That keeps the PS4 fresh while still being able to play PS4 games, because the base GPU is still the same, but the PowerVR GPU can be used at the OS level & for some compute tasks.


At 1GHz a GT7900 PowerVR is 2 TFLOPS half precision.
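
Quick napkin math behind that figure, assuming the 512 FP32 ALUs quoted in the article below, one FMA (counted as 2 FLOPs) per ALU per clock, and double-rate FP16; those per-clock assumptions are mine, not the article's.

```python
# Back-of-the-envelope peak throughput for the GT7900.
# Assumptions: 512 FP32 ALUs, one FMA (2 FLOPs) per ALU per clock, 2x-rate FP16.
ALUS = 512
FLOPS_PER_CLOCK = 2  # one fused multiply-add counted as two FLOPs

def peak_tflops(clock_ghz, fp16=False):
    return ALUS * FLOPS_PER_CLOCK * clock_ghz * (2 if fp16 else 1) / 1000.0

print(peak_tflops(0.8))             # ~0.82 TFLOPS FP32 -> the article's "up to 800 GFLOPS"
print(peak_tflops(0.8, fp16=True))  # ~1.64 TFLOPS FP16 -> the article's "up to 1.6 TFLOPS"
print(peak_tflops(1.0, fp16=True))  # ~2.05 TFLOPS FP16 at 1GHz, the "2 TFLOPS" above
```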

http://www.extremetech.com/gaming/199933-powervr-goes-4k-with-gt7900-for-game-consoles

PowerVR goes 4K with GT7900 for game consoles

PowerVR is announcing its new high-end GPU architecture today, in preparation for both Mobile World Congress and the Game Developers Conference (MWC and GDC, respectively). Imagination Technologies has lost some ground to companies like Qualcomm in recent years, but its cores continue to power devices from Samsung, Intel, MediaTek, and of course, Apple. The new GT7900 is meant to crown the Series 7 product family with a GPU beefy enough for high-end products — including 4K support at 60fps — as well as what Imagination is classifying as “affordable” game consoles.

PowerVR-GT7900

First, some specifics: The Series 7 family is based on the Series 6 “Rogue” GPUs already shipping in a number of devices. But it includes support for hardware fixed-function tessellation (via the Tessellation Co-Processor), a stronger geometry front-end, and an improved Compute Data Master that PowerVR claims can schedule wavefronts much more quickly. OpenGL ES 3.1 + the Android Extension Pack is also supported. The new GT7900 is targeting a 14nm-and-16nm process, and can offer up to 800 GFLOPS in FP32 mode (what we’d typically call single-precision) and up to 1.6 TFLOPS in FP16 mode.

One of the more interesting features of the Series 7 family is its support for what PowerVR calls PowerGearing. The PowerVR cores can shut down sections of the design in power-constrained scenarios, in order to ensure only areas of the die that need to be active are actually powered up. The end result should be a GPU that doesn’t throttle nearly as badly as competing solutions.

PowerVR1

On paper, the GT7900 is a beast, with 512 ALU cores and enough horsepower to even challenge the low-end integrated GPU market if the drivers were capable enough. Imagination Technologies has even created an HPC edition of the Series 7 family — its first modest foray into high-end GPU-powered supercomputing. We don’t know much about the chip’s render outputs (ROPs) or its memory support, but the older Series 6 chips had up to 12 ROPS. The GT7900 could sport 32, with presumed support for at least dual-channel LPDDR4.

Quad-channel memory configurations (if they exist) could actually give this chip enough clout to rightly call itself a competitor for last-generation consoles, if it was equipped in a set-top box with a significant thermal envelope. Imagination is also looking to push the boundaries of gaming in other ways — last year the company unveiled an architecture that would incorporate a ray tracing hardware block directly into a GPU core.

The problem with targeting the affordable console market is that every previous attempt to do this has died. From Ouya to Nvidia’s Shield, anyone who attempted to capitalize on the idea of a premium Android gaming market has either withered or been forced to drastically shift focus. Nvidia may have built two successive Shield devices, but the company chose to lead with automotive designs at CES 2015 — its powerful successor to the Tegra K1, the Tegra X1, has only been talked about as a vehicle processor. I suppose Nvidia could still announce a Shield update around the X1. But the fact that the company didn’t even mention it at CES, where Tegra was launched as a premium mobile gaming part, speaks volumes about where Nvidia expects its revenue to come from in this space.

For its part, Imagination Technologies expects the GT7900 to land in micro-servers, full-size notebooks, and game consoles. It’s an impressive potential resume, but we’ll see if the ecosystem exists to support such lofty goals. If I had to guess, I’d wager this first chip is the proof-of-concept that will demonstrate the company can compete outside its traditional smartphone and tablet markets. Future cores, possibly built with support for Samsung’s nascent Wide I/O standard, will be more likely to succeed.
 
I think this is what will happen: the PS4 games will be the same, but the 2nd GPU will be used for up-rendering & re-projecting
Put in a GPU of the same type and it can be used for native rendering, not upscaling. Without requiring crazy amounts of effort to use.
...while new games could use it for other compute tasks & so on. But devs really wouldn't have to do much, because it's still the PS4, just with a better offloading processor.
'Offloading processors' require lots of work to use. There's no such thing as a simple piece of silicon that can take over the work of code designed for something else. Either it needs super special hardware, or an API to manage it (which PS4 doesn't have).

That's for switching completely between GPUs, not running them in parallel, to support heterogeneous GPU architectures in a portable/laptop. That is, integrated low-power graphics of one form for use on battery, and an extra GPU of another form to be used when plugged in. As a patent it's probably useless, because one can just scale back the high-end GPU, which would be a far easier solution if not as optimally efficient as a low-power-targeted GPU architecture.

Back to my thought of this just being a upgrade to the GNB & not the main GPU.
Your theory about the second GPU graphics northbridge doesn't belong in this thread. Discuss it in the GNB thread.
 
If we're only doubling GPU power every 4 years, then console generations as we know them are over regardless of whether there's a stopgap progression or not. That means roughly a 4x GPU increase per 6-8 year generation, which isn't a generational advance (typically ~10x). Perhaps it's this slowdown of progression that's encouraging (in part at least) a progressive platform model? Get consumers used to the idea and upgrading as they want. This'd mean a better turnaround of hardware IMO. Games would just need a clear minimum version number.
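
To put rough numbers on that compounding (assuming a clean 2x every 4 years, which is just the premise above, not a measured trend):

```python
# GPU-power growth at an assumed 2x every 4 years.
def growth_factor(years, doubling_period=4.0):
    return 2 ** (years / doubling_period)

print(round(growth_factor(6), 1))   # ~2.8x over a 6-year generation
print(round(growth_factor(8), 1))   # 4.0x over an 8-year generation
print(round(growth_factor(13), 1))  # ~9.5x, roughly what a ~10x generational jump would now require
```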

Ignore me, time flies!
 
By "stupid" you mean John Carmack, lead programmer of doom, wolfenstein, quake, rage and their idtech engines, pioneer of several widely used rendering techniques and current Chief Technological Officer of none other than Occulus VR?

John Carmack said:
An attractive direction for stereoscopic rendering is to have each GPU on a dual GPU system render one eye, which would deliver maximum performance and minimum latency, at the expense of requiring the application to maintain buffers across two independent rendering contexts.

Okay, got it.

I think you might want to read the article again. He's saying that for dual-GPU systems, having each GPU render one eye is better for latency than AFR or having both GPUs work on the same frame [buffer]. He's not stating that dual GPU is better than one equally fast single GPU.

John Carmack said:
High end, multiple GPU systems today are usually configured for AFR, or Alternate Frame Rendering, where each GPU is allowed to take twice as long to render a single frame, but the overall frame rate is maintained because there are two GPUs producing frames

Alternate Frame Rendering dual GPU:
[timing diagram from the article: CPU1 simulation and CPU2 render submission each frame, GPU1 and GPU2 rendering alternate frames, then video scanout; overall latency 48 – 64 milliseconds]

Similarly to the case with CPU workloads, it is possible to have two or more GPUs cooperate on a single frame in a way that delivers more work in a constant amount of time, but it increases complexity and generally delivers a lower total speedup.

An attractive direction for stereoscopic rendering is to have each GPU on a dual GPU system render one eye, which would deliver maximum performance and minimum latency, at the expense of requiring the application to maintain buffers across two independent rendering contexts
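
To make the latency argument concrete, here's a toy comparison; all timings are invented round numbers, purely to illustrate the shape of the trade-off, not any real profiling. AFR doubles throughput by pipelining whole frames across the two GPUs, so each displayed frame still sits behind a full GPU frame's worth of work, while eye-per-GPU splits a single frame's work, so the frame is ready as soon as the slower eye finishes.

```python
# Toy latency comparison of the two dual-GPU strategies Carmack describes.
# All timings are made up for illustration.

CPU_MS = 8.0          # per-frame CPU work (sim + render submission)
FULL_FRAME_MS = 16.0  # one GPU rendering both eyes of a frame
ONE_EYE_MS = 8.0      # one GPU rendering a single eye

def afr_latency():
    # Alternate Frame Rendering: each GPU renders a whole frame and frames are
    # pipelined, so throughput doubles but the motion-to-photons path still
    # crosses the CPU work plus a full GPU frame (scanout ignored here).
    return CPU_MS + FULL_FRAME_MS

def eye_per_gpu_latency():
    # One eye per GPU: both eyes render in parallel, so the frame is complete
    # once the slower of the two eyes finishes.
    return CPU_MS + max(ONE_EYE_MS, ONE_EYE_MS)

print(afr_latency())          # 24.0 ms
print(eye_per_gpu_latency())  # 16.0 ms
```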
 
I think you might want to read the article again. He's saying that for dual-GPU systems, having each GPU render one eye is better for latency than AFR or having both GPUs work on the same frame [buffer]. He's not stating that dual GPU is better than one equally fast single GPU.

I'm not sure that's ever been the claim though, has it? Merely that a lot of benefit can be had from 2-GPU systems in VR. Egmon83, on the other hand, was claiming that there was no benefit at all. In theory, the benefits for VR in particular should be quite large.
 
Put in a GPU of the same type and it can be used for native rendering, not upscaling. Without requiring crazy amounts of effort to use.
'Offloading processors' require lots of work to use. There's no such thing as a simple piece of silicon that can take over the work of code designed for something else. Either it needs super special hardware, or an API to manage it (which PS4 doesn't have).

That's for switching completely between GPUs, not running them in parallel, to support heterogeneous GPU architectures in a portable/laptop. That is, integrated low-power graphics of one form for use on battery, and an extra GPU of another form to be used when plugged in. As a patent it's probably useless, because one can just scale back the high-end GPU, which would be a far easier solution if not as optimally efficient as a low-power-targeted GPU architecture.

Your theory about the second GPU graphics northbridge doesn't belong in this thread. Discuss it in the GNB thread.


How do PS4K speculations not belong in a PS4K speculation thread? My theory is just as good as anyone else's in this thread; nobody here has the real information yet.

EDIT: OT discussion about a 320 ALU second GPU in PS4 removed. Discussion of Starsha belongs elsewhere.
 
I think you might want to read the article again.

I read the article. I also read Carmack's tweets about a single GPU in VR having to deal with a lot of overhead and latency when switching between two different viewpoints for the same frame, which a dual-GPU system would avoid (all they have to do is "wait" for both frames to finish, which usually isn't long if the GPUs are identical), and I read presentation slides about AMD's LiquidVR and nVidia's GameWorks VR.

I also have tested SteamVR myself, with a single 290X getting a mediocre score and two 290X with multigpu enabled getting a perfect score.

They all point to dual GPUs being excellent for VR, where you can at least achieve twice the complexity of each frame at the same latency.
Then there are all the VR devkits being shipped with dual-GPU solutions.


After all the stuff presented, how someone could still think dual-GPU is somehow worse for VR is beyond my comprehension.
 
How do PS4K speculations not belong in a PS4K speculation thread? My theory is just as good as anyone else's in this thread; nobody here has the real information yet.
Everyone else is discussing the hardware known to be in the PS4. You're discussing theoretical hardware that is rejected by the mainstream. It's like attending a conference on Darwinian Evolution As Seen In Genetics and talking about the impact of Intelligent Design. Yes, you may be right and we may all be wrong, but we don't want to spend our time debating, to no consensus, whether your hardware theories are right or not. There's another thread for that. When it's proven PS4 has a second GPU in it, then we can talk about this being a target for upgrade. Until then, you're a maverick theorist going against us narrow-minded mainstream. You'll just have to content yourself with being the only guy who really knows what's going on, because we're not going to discuss it any more (until you have some incredibly solid evidence such as a signed affidavit from Cerny or a copy of the engineering blueprints).
 
People on the Internet are thinking PS4K games will improve the framerate over the regular PS4 game. That's cute.

They really think they'll get a PS60fps instead of a PS4K?
 
Surely all it can do for VR is increase framerate?
 
PS4 isn't really suffering from res though. 1080p and nice AA seem reasonably commonplace, no? Whereas 60fps isn't a common target, so it makes sense to want that. UC4 at 60 fps... :yes:
 
More info from the Gaf leaker:

Time from announce to release should be pretty short, just long enough for preorders to be taken. There is a store date which is close to the OG PS4 release date but before Christmas. 500GB HDD (guessing a pricing factor). The actual unit is smaller than the OG PS4. No SKU for a PSVR bundle (yet); also no initial price drop for the OG PS4 (probably after the holidays for trade purposes?)

As usual, HUGE grain of salt.

EDIT:

He confirmed a $499 price

No drop on the OG?! Doesn't that imply that this is 499 then?
 