PS4 Pro Speculation (PS4K NEO Kaio-Ken-Kutaragi-Kaz Neo-san)


Sure... But unless we are talking dual GPUs here, how come Sony is very, very, very concerned with them, and Microsoft is not?

If the Xbox Scorpio is Polaris, and uses the same APU with 2304 cores (the 2560-core one should be expensive, and yields even more troublesome; besides, we never heard of it being present in an APU), it must run at 1300 MHz for 6 TFLOPS.

Someone notify the police. :runaway:

No good... their phone number is in cahoots with this crazy theory :rolleyes:
 

Is that the math? What TFLOPS are we looking at if it's running at 1100 MHz? How many cores would need to be added to get to 6 TFLOPS? 2560 @ 1100 MHz?
 
1300 MHz × 2304 shader processors × 2 instructions per cycle = 5,990,400 MFLOPS = 5.99 TFLOPS
2560 SPs at 1100 MHz would be 5.6 TFLOPS!

PS4 - 800 × 1152 × 2 = 1.84 TFLOPS
One - 853 × 768 × 2 = 1.31 TFLOPS
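As a sanity check, the peak-FLOPS arithmetic above (shader cores × clock × 2 FMA ops per cycle) can be expressed as a small Python function; the configurations are the ones quoted in the thread, and `peak_tflops` is just a helper name for this post:

```python
# Peak FP32 throughput: shader cores x clock (MHz) x 2 ops/cycle (FMA),
# converted from MFLOPS to TFLOPS. These are theoretical peaks only.

def peak_tflops(shader_cores: int, clock_mhz: float, ops_per_cycle: int = 2) -> float:
    """Return peak single-precision TFLOPS for a given GPU configuration."""
    return shader_cores * clock_mhz * ops_per_cycle / 1_000_000

print(peak_tflops(2304, 1300))  # ~5.99 (rumoured Scorpio config)
print(peak_tflops(2560, 1100))  # ~5.63
print(peak_tflops(1152, 800))   # ~1.84 (PS4)
print(peak_tflops(768, 853))    # ~1.31 (Xbox One)
```

Note that 2560 SPs at 1100 MHz lands at about 5.63 TF, so it still falls short of the 6 TF target.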
 
If the Xbox Scorpio is Polaris, and uses the same APU with 2304 cores (the 2560-core one should be expensive, and yields even more troublesome; besides, we never heard of it being present in an APU), it must run at 1300 MHz for 6 TFLOPS.

Maybe they don't plan to sell many, and the price will be high, like $599.

No. That's to do with low power and high power modes. What has that got to do with a new home console?

SCEA makes consoles. The patent is old but it shows the direction they were headed in. Obviously there will be an updated one closer to the release of PS4K.
 
Depends when it releases, I think. If it releases next year I'd say definitely worth it. 6 TFLOPS would be a beast even among the highest-end PC GPUs.
The GeForce 1080 is 9 TF and next year's GPUs will be faster. But 6 TF will be more competitive for sure.
 
The GeForce 1080 is 9 TF and next year's GPUs will be faster. But 6 TF will be more competitive for sure.

The GeForce 1080 has async compute problems... In some games that use it, the Fury X is not far behind, and it's last gen!

[attached benchmark chart]

Besides, 6 TF in a console's dedicated hardware can be put to better use than in a generic PC!
 
The GeForce 1080 has async compute problems... In some games that use it, the Fury X is not far behind, and it's last gen!

So the card is at fault rather than some games? Ummm... I think I'll wait for a few driver and game updates before declaring the GPU the problem. This is the nature of PC hardware: radically new hardware is rarely great on launch day, because software is optimised for older, less radical technology.

Besides, 6 TF in a console's dedicated hardware can be put to better use than in a generic PC!

It's much easier to optimise software for a fixed hardware specification than for every possible specification, but if consoles are going to start shipping in more configurations, that optimisation period is going to be split across the various configurations. Wait, you didn't think publishers would pay devs to dedicate more optimisation time to cover each console variation, did you? :nope:
 
So the card is at fault rather than some games? Ummm... I think I'll wait for a few driver and game updates before declaring the GPU the problem. This is the nature of PC hardware: radically new hardware is rarely great on launch day, because software is optimised for older, less radical technology.

There is no AMD optimization here... just plain async compute on DX12.

It may be driver related, but regardless, several sources report it, and benchmarks like the one I posted show bad results.
http://www.thecountrycaller.com/592...-reportedly-facing-issues-with-async-compute/
 
There is no AMD optimization here... just plain async compute on DX12.
Nobody mentioned AMD optimisations. Past Nvidia implementations of asynchronous compute used a static partition of GPU resources between graphics and async work, but the 1080 uses dynamic load balancing - kind of like what the PS4 does, and presumably Xbox One too.

Load balancing is one of the most complicated problems to crack, and I can't remember ever seeing an implementation that worked at launch. It's something that needs to be released, observed, then tweaked.
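The static-partition vs dynamic-balancing distinction can be sketched with a toy model. To be clear, this is an illustration only, not how any real GPU scheduler works; the unit counts and the `ticks_to_drain` helper are invented for this post:

```python
# Toy model of static partitioning vs dynamic load balancing of a GPU's
# compute units. Entirely hypothetical numbers -- it just shows why units
# reserved for async work can sit idle and cost you throughput.

def ticks_to_drain(gfx_work, async_work, total_units, reserved_async=None):
    """Count scheduler 'ticks' needed to drain both work queues.

    If reserved_async is set, that many units serve ONLY the async queue
    (static partition); otherwise any unit picks up whichever queue has work.
    """
    ticks = 0
    while gfx_work > 0 or async_work > 0:
        if reserved_async is None:               # dynamic balancing
            a = min(async_work, total_units)
            g = min(gfx_work, total_units - a)
        else:                                    # static partition
            a = min(async_work, reserved_async)
            g = min(gfx_work, total_units - reserved_async)
        gfx_work -= g
        async_work -= a
        ticks += 1
    return ticks

# Graphics-heavy frame with only a little async work, 64 units total:
print(ticks_to_drain(1000, 50, 64))                     # dynamic
print(ticks_to_drain(1000, 50, 64, reserved_async=16))  # static is slower here
```

In this toy case the static split finishes noticeably later, because once the async queue is empty the 16 reserved units contribute nothing while graphics work is still queued.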
 
So what you are saying is that Nvidia must, once again, run optimized rather than generic DirectX 12 code?
Can Pascal run efficient async code without pre-emption? Real async compute should be able to do this!
 
So what you are saying is that Nvidia must, once again, run optimized rather than generic DirectX 12 code? Can Pascal run efficient async code without pre-emption? Real async compute should be able to do this!

I didn't say that or anything like that. How about you stop putting words in other people's mouths, huh? :rolleyes:
 
The GeForce 1080 is 9Tf and next year's GPUs will be faster. But 6Tf will be more competitive for sure.

It may very well be that next year's chips are no faster for the same die area as the Nvidia 1080. It might not be until 10nm that chips get significantly faster / less power-hungry at a given die area.

What we will likely see is big versions of existing chips pushing the TFLOP boundary. Smaller chips will likely get only marginally better as the 16/14nm process matures.

A 6 TFLOP console is definitely possible, but it has a cost in die area and heat produced. For the end user that means a more expensive console that could potentially be noisier and bigger than the lesser competitor(s). Producing those chips might also be problematic due to not being able to bin them.
 
Or maybe nV hardware just doesn't need async to fully utilize its potential.
Dynamic load balancing between graphics and async compute is going to be an interesting thing to watch and a complicated problem to solve. You have finite resources in the GPU, so when does the driver decide to allocate more for async compute rather than raw rendering? What if the async compute is critical to the rendering?

I think we'll see Nvidia tweak this a lot, perhaps even adding game-specific profiles to assist or steer their load-balancing algorithm. I'm very interested to know whether the balancing is retrospective or predictive. It's not a million miles away from the problems of dynamic resolution scaling; it's always a case of "I wish I'd known that x cycles ago".
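The dynamic-resolution-scaling analogy can be made concrete with a minimal retrospective controller: it only reacts to how the *last* frame went, which is exactly the "I wish I'd known that x cycles ago" problem. Purely hypothetical code; `TARGET_MS`, `next_scale`, and the square-root correction are illustrative assumptions, not any engine's actual implementation:

```python
# Toy retrospective controller in the spirit of dynamic resolution scaling:
# pick the next frame's resolution scale from how far the LAST frame missed
# its time budget. Real engines predict as well; this one only reacts.

TARGET_MS = 16.6  # frame budget for 60 fps

def next_scale(current_scale: float, last_frame_ms: float,
               lo: float = 0.5, hi: float = 1.0) -> float:
    """Return the resolution scale to use for the next frame."""
    # Cost is roughly proportional to pixel count (scale squared),
    # so correct by the square root of the time ratio, then clamp.
    corrected = current_scale * (TARGET_MS / last_frame_ms) ** 0.5
    return max(lo, min(hi, corrected))

print(next_scale(1.0, 22.0))  # last frame too slow -> lower the resolution
print(next_scale(0.7, 12.0))  # headroom left -> raise the resolution
```

Being retrospective, it is always one frame behind a sudden load spike, which is why a driver-side load balancer facing the same structure might want game-specific hints or prediction.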

It may very well be that next year's chips are no faster for the same die area as the Nvidia 1080. It might not be until 10nm that chips get significantly faster / less power-hungry at a given die area.

You may be right; maybe next year's chips will just be more energy-efficient versions of today's. Not very likely, but you never know. The 1080 does almost twice what the 980 does at far less power, so my guess would be that, fabrication permitting, next year's chips will just have more transistors and more cores and/or higher clocks.

A 6 TFLOP console is definitely possible, but it has a cost in die area and heat produced. For the end user that means a more expensive console that could potentially be noisier and bigger than the lesser competitor(s). Producing those chips might also be problematic due to not being able to bin them.

I'm not disagreeing, I was just pointing out that 6 TF is not "a beast" in terms of today's GPUs, let alone what we may have next year. I'm not scoffing at a 6 TF console at all. I was replaying Arkham Knight this afternoon on PS4 and I'm constantly thinking: how the hell did they do this on a console?
 