Middle Generation Console Upgrade Discussion [Scorpio, 4Pro]

There's no way they're suddenly going to be like "oh, now we're going to put Zen in there" with 6 months to go. Maybe if it was discrete CPU and GPU, but it's an APU. These things take years to plan out.

Unless you're speculating about things other than Zen. I think GPU up-flops are a distinct possibility, though hardly one to be counted on.

Most likely clocks, and memory setup + amount.

I am thinking they were banking on the feasibility of 16 GB of GDDR5X on a 256-bit bus. Fewer chips, smaller bus, lower power, smaller mobo, more RAM in exchange for pricier chips.
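The arithmetic behind that configuration can be sketched out; the chip width, density, and per-pin data rate below are illustrative assumptions, not confirmed specs:

```python
# Back-of-the-envelope check of a hypothetical 16 GB GDDR5X / 256-bit setup.
# All figures are illustrative assumptions, not confirmed specs.

BUS_WIDTH_BITS = 256
CHIP_IO_BITS = 32          # GDDR5/GDDR5X chips typically use a 32-bit interface
CHIP_DENSITY_GB = 2        # assumes 16 Gb (2 GB) GDDR5X parts
DATA_RATE_GBPS = 10        # early GDDR5X targeted around 10 Gb/s per pin

chips = BUS_WIDTH_BITS // CHIP_IO_BITS                # chips needed to fill the bus
capacity_gb = chips * CHIP_DENSITY_GB                 # total capacity
bandwidth_gbs = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8   # peak bandwidth in GB/s

print(chips, capacity_gb, bandwidth_gbs)  # 8 chips, 16 GB, 320.0 GB/s
```

Under those assumptions you get the "fewer chips" trade: eight 2 GB parts instead of sixteen narrower GDDR5 chips, with bandwidth carried by the higher per-pin rate.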
 
Could they ship it sooner than expected? Like before summer? Like announcing it at E3 and then saying "available tomorrow!"?
They still have to wait for Vega 11, node improvements, etc. to get to 6 TF. I don't think they're going to go any higher; achieving that target in 2017 shouldn't be any easier than achieving 4.2 TF in 2016, quite the contrary.
 
DDR3 is roughly the same latency as GDDR5, however. When I cross-referenced the SDK for the ESRAM latency, the rough latencies for memory came out close. More of the latency came from the delay getting out of the cache and through the uncore.

What usually happens is that GDDR5 is tied to a GPU memory subsystem that heavily emphasizes bandwidth at the expense of taking more time to coalesce and re-order accesses. DDR3 is usually tied to a device like a CPU that cannot tolerate that kind of latency, and so the memory subsystem fires off accesses sooner.
The internal arrays of GDDR5 and DDR3 are physically similar, if not the same. The difference is more in the choice of how many arrays the interface reads at once, and in signalling. The interface and distance traveled don't add that much, so the dominant factors are either the DRAM arrays (not different) or the memory controller and hierarchy (independent of the memory device).
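A toy model of that decomposition: the array and interface times are held equal for both memory types, and only the controller policy differs. All numbers below are invented purely for illustration.

```python
# Toy latency model: DDR3 and GDDR5 share similar internal array timings;
# the observed latency gap comes mostly from the memory controller's policy.
# All numbers are made up for illustration.

DRAM_ARRAY_NS = 40   # internal array access, assumed similar for both types
INTERFACE_NS = 10    # signalling/interface overhead, assumed similar too

def total_latency_ns(controller_wait_ns):
    """Total access latency = controller queueing + array + interface."""
    return controller_wait_ns + DRAM_ARRAY_NS + INTERFACE_NS

# A bandwidth-oriented (GPU-style) controller waits to coalesce and re-order
# accesses; a latency-oriented (CPU-style) one fires off accesses sooner.
gddr5_like = total_latency_ns(controller_wait_ns=60)
ddr3_like = total_latency_ns(controller_wait_ns=15)

print(gddr5_like, ddr3_like)  # 110 65: the gap is in the controller, not the DRAM
```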

edit:
For reference, here's where I tried to derive some of the values:
https://forum.beyond3d.com/threads/...are-investigation.53537/page-407#post-1816112
Another way of examining it is looking at CPU-bottlenecked moments in multiplatform games that show a disparity greater than the 10% clock-speed difference (1.75 GHz vs 1.6 GHz). Digital Foundry has done 50 or more comparisons of games, and of the ones I've seen, in those moments, such as the first Deathclaw encounter in Fallout 4, the Xbox had a 3-frame advantage over the PS4: 29-30 vs 26-27, roughly matching the 10% clock-speed gap.
NX Gamer did a test in this Deathclaw scene, using weapons to create lots and lots of alpha-transparency effects to rule out a memory bandwidth issue on the PS4's part, and the framerate remained unaltered, leaving a CPU-based bottleneck as the most likely thing holding back the PS4.
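Running the numbers from that post, using midpoints of the reported frame-rate ranges:

```python
# Clock-speed ratio vs observed frame-rate gap in the Deathclaw scene,
# using the figures quoted above (midpoints of the 29-30 and 26-27 ranges).

xbox_clock, ps4_clock = 1.75, 1.6   # CPU clocks in GHz
xbox_fps, ps4_fps = 29.5, 26.5      # observed frame rates

clock_advantage = xbox_clock / ps4_clock - 1   # ~9.4%
fps_advantage = xbox_fps / ps4_fps - 1         # ~11.3%

print(f"clock advantage: {clock_advantage:.1%}")
print(f"frame-rate advantage: {fps_advantage:.1%}")
```

The frame-rate gap lands in the same ballpark as the clock gap, which is what you would expect if both machines were CPU-limited in that scene.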
 
RAM latency is not an issue on a console; optimizing to avoid cache misses is what's essential.
 

Personally I hope for more than just a CPU upclock, but information to corroborate this is slim to none.

I was mostly quoting because the original tweet seemed to be about that white paper which DF dissected; if his information was different and has since changed, we can't really infer any change from the DF contents with confidence.

Looking at his subsequent tweets, they don't disagree with the DF report; did the specs shift, or has Microsoft just become more bullish about their device? Less talk of technical measures to help, and more sales talk: "native 4K", "pretty pixels", etc.
 
Well, I stand to be corrected here, but one of the major hurdles for consoles is the thermal envelope. If the technology just isn't there to keep the TDP low, it's hard to imagine that once you've hit that thermal envelope you'd try to jam more power in; something has to give, and that may very well be the number of operations the processing unit(s) can do.

That being said, if you are thermally limited, it would make sense (at least to me) to focus more on heat-rejection techniques or on reducing load on the processing units. 5% less work here, 10% less there; add up enough savings and you're moving into some big gains.

An upclock plus a load reduction would, I think, be sufficient to meet a lot of targets.
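The "savings add up" point compounds multiplicatively if each optimization cuts a fraction of the load that remains after the previous ones; the percentages below are entirely hypothetical.

```python
# Compounding hypothetical load reductions: each optimization removes a
# fraction of the CPU load remaining after the previous ones.

savings = [0.05, 0.10, 0.05, 0.08]   # made-up per-optimization cuts

load = 1.0
for s in savings:
    load *= (1.0 - s)

print(f"remaining load: {load:.3f}")   # ~0.747, i.e. ~25% total reduction
```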
 
Is there something else between Jaguar and Zen for the console market? Or, if it's not Zen, is it Jaguar, 100%?
 
Not sure how well it bodes with MS talking about half-rate animation (which is already used in some games).
That can be very noticeable.
 
There is Puma? I think it's been referred to here as a bug-fixed Jaguar.

Puma fixed turbo and power management, and I think also some kind of b0rken divide operation. Nothing that improved performance per clock. The 16/14 nm cat cores in the slims and the Pro are probably already as good as they can be without further engineering work being done. If MS stay with the cat cores, PS4 Pro speeds, or perhaps a nominal increase, are likely the best we'll see in Scorpio, as going further may be of questionable value if it eats too heavily into the power/thermal luncheon box.

A ~20% CPU bump from four-year-old low-power laptop cores, despite a significant process advancement and newer, drastically improved architectures landing, would not be terribly exciting.

Perhaps larger and/or lower-latency, faster caches (the Jaguar L2 is half-rate) might be easy to achieve on these much-improved nodes, and easy enough to shoot for. Something like that might give IPC a boost while keeping power increases low.
 

Frame interpolation should (should) make everything move more smoothly.

I could see a situation where you finish rendering a frame and, as you begin the next - to be delivered in 33 ms - you use compute to start work on an intermediate frame based on colour, depth and motion vectors from the last two or three frames. You deliver the interpolated frame after 16.7 ms, while continuing work on the next 'proper' frame, delivered 16.7 ms after that.

Given post processing and the usual heavy use of motion blur, this might make interpolated frames practically indistinguishable and give the appearance of full 60 fps.

Though input lag would still be that of a 30 fps game ... :S
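The timeline described above can be sketched as a toy presentation schedule; the function and numbers are illustrative only, not an actual renderer:

```python
# Toy presentation schedule for the scheme above: a fully rendered frame
# every ~33.3 ms, with an interpolated frame inserted on the half-step to
# give a ~16.7 ms cadence. No actual rendering involved.

RENDER_MS = 1000 / 30   # ~33.3 ms between fully rendered frames

def presentation_schedule(n_rendered):
    """Return (time_ms, kind) events for n_rendered proper frames, each
    followed half a frame later by an interpolated frame."""
    events = []
    for i in range(n_rendered):
        events.append((i * RENDER_MS, "rendered"))
        events.append((i * RENDER_MS + RENDER_MS / 2, "interpolated"))
    return events

for t, kind in presentation_schedule(2):
    print(f"{t:5.1f} ms  {kind}")
```

The 30 fps input lag remains because controller input still only influences the rendered frames on the 33 ms grid; the interpolated frames are derived from already-rendered data.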
 
Perhaps, if the CPU really is that critical. Scorpio coming in at FL 12_2 would be huge; I would consider that a generational gap despite the modest power increase.
 
On the discussion on whether Scorpio is a new generation machine or not, Thomas Mahler from Moon Studios says that Scorpio is a brand new generation machine.
[Image: EAcJvbI.png]
 

Next gen which only allows current gen SKUs...

Marketing must be bricking it.
 
Digital Foundry got a hold of a whitepaper sent to developers about future Xbox One/Scorpio/PC development.

Some tidbits:

- No ESRAM in Scorpio.
- Reiterates that developers will *not* be able to develop Scorpio exclusives. Any title developed for Scorpio must also run on the OG Xbox One, and the paper goes on to suggest how that can be achieved while also maximizing Scorpio.
- Suggestions that developers find ways to run their CPU work at half-rate (30 Hz) if they want to target 60 Hz, which *may* indicate the lack of a significant CPU upgrade in Scorpio.
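That half-rate suggestion can be sketched as a render loop that only ticks simulation logic on every other frame; the structure below is hypothetical, based only on the description above:

```python
# Sketch of a 60 Hz render loop with 30 Hz game logic, per the whitepaper
# suggestion described above. Hypothetical structure, not real engine code.

def run_frames(n_frames):
    schedule = []
    for frame in range(n_frames):
        if frame % 2 == 0:
            schedule.append((frame, "simulate+render"))  # 30 Hz logic tick
        else:
            schedule.append((frame, "render-only"))      # reuse/extrapolate state
    return schedule

for entry in run_frames(4):
    print(entry)
```

The render-only frames would reuse or extrapolate the last simulation state, which is why a weak CPU can still feed a 60 Hz display this way.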
DF journalists are such little rascals. :LOL: They admit the papers are 6 months old. Things have changed. See also how they describe Scorpio now compared to how they described it a month ago.

Before: [Image: 2TMEGFA.jpg]

Now: [Image: zLL4D4h.jpg]
 
Kinda need a good CPU for VR framerates, but it could also mean VR is kinda DOA.

It sure seems to mean that it won't be compatible with all XB1 games and accessories. Could just be some legal maneuvering to make sure people know Kinect, Kinect games and 1st gen controllers will no longer work with Scorpio.

Or... it could mean more than that.
 
FL 12_2?

Getting lost now; is that a DirectX feature level, and an optional tier?
It's a feature level; I'm not sure "optional" is the right terminology. I would say the current standard is FL12_0 for features (XBO, PS4, 4Pro, GCN1/GCN2/GCN3, Nvidia pre-Maxwell), but that's effectively just DX11.2+ in terms of feature set.
The real goodies arrived in FL12_1; FL12_2 and FL12_3 are just better implementations of FL12_1. Maxwell+, and I think Vega (still unconfirmed).

I'd prefer not to label things as DX feature levels, since it's an MS construct, but at the same time it's a simpler way of categorizing cards.
 