Middle Generation Console Upgrade Discussion [Scorpio, 4Pro]

Status
Not open for further replies.
I would like more substantial proof that this happened than just a claim by a user in the comments section of a website!

This one:
http://digiworthy.com/2016/09/11/amd-zen-custom-socs-project-scorpio/

When will this nonsensical meme die? I wish people would practice basic reading comprehension. Every quote that I have seen has intentionally been silent on 2017 for a Zen APU. The complete degradation of basic reading comprehension in our society, which has become rampant (or at least better exposed) since the rise of social media, is driving me batty!!!
 
When will this nonsensical meme die? I wish people would practice basic reading comprehension. Every quote that I have seen has intentionally been silent on 2017 for a Zen APU. The complete degradation of basic reading comprehension in our society, which has become rampant (or at least better exposed) since the rise of social media, is driving me batty!!!

It was never backed up with a real Lisa Su quotation. But it didn't matter; it's what some people wanted to believe, so I guess that was enough to cement it into a lot of minds as the truth.

It's really amazing how this happens every single time there's an upcoming hardware release. No lessons from previous hardware releases are ever learned, so we just repeat the same cycles of grudge trolling and impossible spec hyping again and again.
 
It's really amazing how this happens every single time there's an upcoming hardware release.
Not that amazing. Clearly some people like the sport that new hardware provides. Shouldn't be too long before we see some trying their luck at 3D printed prototypes.
 
Has anybody here thought about the claim about this supposedly new low-latency connection? I have no experience with modern GPU programming; my background is more in the low-level CPU/IO area. Let's assume this NeoGAF comment has some truth to it. What would this entail?

From my admittedly limited perspective, I would assume the main latency issue in such an APU, in this context, would be the CPU cache.

If the CPU builds the GPU command/descriptor lists, I assume they live in standard write-back memory pages, so the GPU DMA operation would be delayed either by the driver flushing the area or by the CPU's coherency machinery dealing with this across (multiple?) L2s and the L3, which is surely not cheap.

How would they speed this up? Connect the GPU itself to the CPU cache, or use some special fast internal shared SRAM just for the descriptors, to avoid CPU cache/memory coherency delays as much as possible? In the latter case I can't judge whether that's reasonable, as I have no idea about the size of such descriptor lists for the GPU.

Any opinions here?
 
Not that amazing. Clearly some people like the sport that new hardware provides. Shouldn't be too long before we see some trying their luck at 3D printed prototypes.

We already have 3D printed hardware ahead of release. There was a great Switch fake and a follow-up video showing the build process.

I have seen Scorpio case photoshops but not a 3D print yet; no doubt it will come.
 
Let's assume this NeoGAF comment has some truth to it.
Why?
Any opinions here?
I think it's silly and pointless to discuss something that might be a scientific impossibility. If you want to discuss ways to reduce latency between CPU and GPU, you'd be better off starting a thread in the 3D Tech forum to ascertain where the latencies are, their impact, and what could be done to address them, leaving Scorpio completely out of the picture so as not to pollute the discussion.
We already have 3d printed hardware ahead of release.
Yeah, I meant 3D printed Scorpio prototypes. Haven't seen any yet.
 
Has anybody here thought about the claim about this supposedly new low-latency connection? I have no experience with modern GPU programming; my background is more in the low-level CPU/IO area. Let's assume this NeoGAF comment has some truth to it. What would this entail?

From my admittedly limited perspective, I would assume the main latency issue in such an APU, in this context, would be the CPU cache.

If the CPU builds the GPU command/descriptor lists, I assume they live in standard write-back memory pages, so the GPU DMA operation would be delayed either by the driver flushing the area or by the CPU's coherency machinery dealing with this across (multiple?) L2s and the L3, which is surely not cheap.

How would they speed this up? Connect the GPU itself to the CPU cache, or use some special fast internal shared SRAM just for the descriptors, to avoid CPU cache/memory coherency delays as much as possible? In the latter case I can't judge whether that's reasonable, as I have no idea about the size of such descriptor lists for the GPU.

Any opinions here?

I don't know any of the specifics, but there could be some truth to it.

Starting with Vega and Zen, AMD introduces Infinity Fabric, a new interconnect, both on-chip and off-chip:

http://wccftech.com/amds-infinity-fabric-detailed/

The guy clearly is not knowledgeable enough, and he's trying to impress his audience with a lot of hyperbole and misconceptions.

The other possibility is that it's just a joke.
 
Yeah, I don't mind talking about things that are improbable but not impossible, as it makes for interesting conversations and what-ifs.
If you are against those types of speculation and discussion, then there's nothing wrong with discussing what you believe is the case and what can be done with it.

But in that "leak" I don't see anything remotely worth discussing, IMO.
Or, as Shifty said, pick out specific parts for discussion; but there was so much crazy talk in that comment.
 
Has anybody here thought about the claim about this supposedly new low latency connection? Personally I have no experience with modern GPU programming but my programming knowledge is more in the low-level CPU/IO area. Let's assume this neogaf comment has some truth to it. What would this entail?

From my granted limited perspective I would assume the main latency issue in such APU and in this context would be the CPU cache.

If the CPU builds the GPU commands/descriptors list I assume they operate in standard copy back memory pages so the GPU DMA operation would be delayed either by the driver flushing the area or by the CPU's coherency dealing with this on (multiple?) L2s and the L3 which is surely not cheap.

How would they speed this up? Connect the GPU itself to the CPU Cache or use some special fast internal shared SRAM just for the descriptors to avoid CPU cache/memory coherency delays as much as possible? In the later case I can't judge if that's reasonable as I have no idea about the size of such descriptor lists for the GPU.

Any opinions here?
It's almost certainly a load of rubbish, but if you want to assume some truth, then my concern would be whether whatever this new hardware is would require specific code to use, or whether the API will handle it all automatically. If there's specific hardware that needs coding for, then it's less likely to be used by third-party devs, or it won't be well optimised, as we saw with ESRAM.

With Scorpio I was hoping that MS would have learned some lessons and made the hardware work like others' but be more powerful. Asking third-party devs to put extra time in to utilise specific hardware for one console will more often than not result in a half-arsed attempt, or in the feature not being used at all.

From what's been released, Scorpio is sounding good: a nice powerful GPU, a fast, larger, unified memory pool, and I'm assuming a much better CPU, as they've said the hardware is balanced. Unless the person that wrote all that crap can link us to the patent for this stuff, this is Mr X territory, and it's better not to head down that rabbit hole.
 

What I speculated about here is below the level of any game developer optimization.

I don't know any of the specifics, but there could be some truth to it.

Starting with Vega and Zen, AMD introduces Infinity Fabric, a new interconnect, both on-chip and off-chip:

http://wccftech.com/amds-infinity-fabric-detailed/

The guy clearly is not knowledgeable enough, and he's trying to impress his audience with a lot of hyperbole and misconceptions.

The other possibility is that it's just a joke.

Thanks, interesting link nevertheless, as it's mostly AMD information, I suppose. This looks more like a generic approach by AMD to simplify their design process, with some potential internal speed advantages. I doubt it would have the impact the dubious original claim suggested, though, nor can I see the connection with MS.

Why?
I think it's silly and pointless to discuss something that might be a scientific impossibility. If you want to discuss ways to reduce latency between CPU and GPU, you'd be better off starting a thread in the 3D Tech forum to ascertain where the latencies are, their impact, and what could be done to address them.

It's surely not a scientific impossibility to reduce latency between CPU and GPU, which makes the claim interesting to look into. It's more a matter of whether the potential benefits are worth the needed design changes, and what those changes could be to be patent-worthy. If we can't even discuss this little tidbit, why bother discussing this dubious post at all and arguing about Zen or not and price for the nth time?
 
It's surely not a scientific impossibility to reduce latency between CPU and GPU, which makes the claim interesting to look into.
Not specifically. However, discussing rumours makes no sense especially when formed from fanboy delusions. It's like a rumour saying Scorpio is a handheld with a five hour battery life, and then saying, "okay, let's discuss how they could achieve this 12 GB 6 TF handheld". Clearly Scorpio isn't a handheld and so the viability of this handheld shouldn't be discussed in the context of Scorpio.
If we can't even discuss this little tidbit.
I didn't say don't discuss it. It's a subject that has merit (how to reduce CPU/GPU latency and is near zero latency actually possible). It just shouldn't be discussed here because it has no realistic bearing on Scorpio. If not for this collection of gobbledegook, the subject wouldn't have been raised in a 'mid gen console' thread.
why bother discussing this dubious post at all
We shouldn't. It was a shit post full of shit that only generates noise. I probably should have removed it as a mod for the good of the signal-to-noise ratio, but I don't think us censoring which ideas do and don't get posted is a good way to manage things. I'd rather people have the sense to avoid obvious crap themselves, and, if there's a meaningful topic derived from some crap, to give it meaningful coverage in a relevant environment.
 
While most of it sounded like bullshit, the actual spec "guesstimates" in that post at least seemed somewhat possible, albeit optimistic... much more so than the recent Spanish article. The only questionable thing was the GDDR5X...

A 6.2 TF GPU @ 977 MHz is pretty specific... flops and clock speed also line up with 50 CUs.

Also, that digiworthy article is from September.

Here is another one from 1 month ago: http://digiworthy.com/2017/01/08/project-scorpio-use-amd-raven-ridge-apu-ryzen-vega/
 
Also... with regards to the Digital Foundry leak... has anybody speculated on what "4 times the L2 cache" might mean for the CPU? Xbox One had 2MB of unified L2 cache per 4 cores, so 4MB in total.
 
While most of it sounded like bullshit, the actual spec "guesstimates" in that post at least seemed somewhat possible, albeit optimistic... much more so than the recent Spanish article. The only questionable thing was the GDDR5X...

A 6.2 TF GPU @ 977 MHz is pretty specific... flops and clock speed also line up with 50 CUs.

50 CUs would be four blocks of 12.5 CUs, so not likely.
 
Also... with regards to the Digital Foundry leak... has anybody speculated on what "4 times the L2 cache" might mean for the CPU? Xbox One had 2MB of unified L2 cache per 4 cores, so 4MB in total.
The context for the leak was for the GPU.
 
Here's my speculation based on the information we know.
Proof of speculation.
Exhibit A;
October 5, 2016. Phil Spencer: "http://www.gamespot.com/articles/xbox-head-phil-spencer-talks-scorpio-ps4-pro-4k-re/1100-6444198/"
"We'd looked at doing something that was higher performance this year, and I'd say the [PS4] Pro is about what we thought--with the GPU, CPU, memory that was here this year--that you could go do, and we decided that we wanted to do something different."

To me this says one thing: any CPU, GPU, and memory technology available in 2016 is not in Scorpio.

Exhibit B;
Jan 5, 2017 " The Picture'
"http://wccftech.com/xbox-scorpio-amd-ryzen-vega/"

Exhibit C; Scorpio white paper
"4x L2 cache"

Based on these exhibits, here's what I think will be in Scorpio.
CPU:
Ryzen, 8 cores, notebook based, without SMT. SR5 @ 2.2 - 2.6GHz
GPU:
Vega based, 14nm, "active" 54CU @ 900MHz
Memory:
Power is why I chose GDDR5X, 384-bit @ 3350MHz, 320GB/s

As far as cost, Phil already said console-level pricing, or a premium price.
Custom chips do not cost manufacturers hundreds of dollars; you must remember that these companies buy chips in bulk. Remember the leaked document from Microsoft that showed the Xbox One SoC would cost exactly $26, and that's for all technologies and licensing of core technologies. This SoC may cost MS only $30 - $40 max. Example: the "A9 chip found inside of the iPhone 6s and 6s Plus. The estimate worked out to something in the range of $22 to $24, in line with an estimate provided by IHS iSuppli."
Scorpio starts at $399 and up.
 
While most of it sounded like bullshit, the actual spec "guesstimates" in that post at least seemed somewhat possible, albeit optimistic... much more so than the recent Spanish article. The only questionable thing was the GDDR5X...

A 6.2 TF GPU @ 977 MHz is pretty specific... flops and clock speed also line up with 50 CUs.

Also, that digiworthy article is from September.

Here is another one from 1 month ago: http://digiworthy.com/2017/01/08/project-scorpio-use-amd-raven-ridge-apu-ryzen-vega/
The psychotic tone, the claims of connections with Microsoft, and all the CAPS are enough to ignore it and move on. It could equally be trolling from an optimistic Xbox fanboy or trolling from an angry PS4 fanboy. Either way, that person isn't sane.:runaway:

Specs are as optimistic as most guesswork during launch-year frenzy. The recurring speculation of GDDR5X is par for the course compared to everything else. It's still interesting because it's exactly 320GB/s on a 256-bit bus. I dismissed it simply because it would mean the E3 presentation was an elaborate fake, but for the sake of argument...

With industry-standard binning:
GDDR5 is 336GB/s at 7Gbps (384-bit, from the presentation)
GDDR5X is 320GB/s at 10Gbps (256-bit, as speculated)

IMO, the most plausible explanation is that they can't get a consistent 7Gbps real-world from the 7Gbps bin, and neither did Sony with the PS4 Pro. The Pro is 218GB/s from 7Gbps parts, so they are actually clocked at 6.8Gbps. It's a very similar AMD-designed GPU, memory controller, and process. The Pro has fewer channels, potentially giving better yield than Scorpio. It adds up to Scorpio's "at least 320GB/s" being 6.7Gbps. Coincidence? I think not!

With GDDR5, they might end up somewhere between 320 and 336, and they gave a conservative "at least" figure because they didn't have the final yield data yet? Or a final PCB revision either?

Also, GDDR5X means 8GB, while GDDR5 means 12GB. People speculating GDDR5X seem to be doing so for the buzzword, while GDDR5 might be the better choice for a lower cost per GB.
 
50 CUs would be four blocks of 12.5 CUs, so not likely.

MS always talks about Scorpio being Xbox One quality at 4K, so I assume that Scorpio will be just 48 CUs to make it simple, lol,
and it would make sense given the "four times the L2" figure.

Did they say it was four times larger, or that it had four times the L2 of the XBO?
 
MS always talks about Scorpio being Xbox One quality at 4K, so I assume that Scorpio will be just 48 CUs to make it simple, lol,
and it would make sense given the "four times the L2" figure.

Did they say it was four times larger, or that it had four times the L2 of the XBO?
Yeah, that is probably right: 4x the CUs and 4x the L2 of the Xbox One.
 
Here's my speculation based on the information we know.
Proof of speculation.
Exhibit A;
October 5, 2016. Phil Spencer: "http://www.gamespot.com/articles/xbox-head-phil-spencer-talks-scorpio-ps4-pro-4k-re/1100-6444198/"
"We'd looked at doing something that was higher performance this year, and I'd say the [PS4] Pro is about what we thought--with the GPU, CPU, memory that was here this year--that you could go do, and we decided that we wanted to do something different."

To me this says one thing: any CPU, GPU, and memory technology available in 2016 is not in Scorpio.

Exhibit B;
Jan 5, 2017 " The Picture'
"http://wccftech.com/xbox-scorpio-amd-ryzen-vega/"

Exhibit C; Scorpio white paper
"4x L2 cache"

Based on these exhibits, here's what I think will be in Scorpio.
CPU:
Ryzen, 8 cores, notebook based, without SMT. SR5 @ 2.2 - 2.6GHz
GPU:
Vega based, 14nm, "active" 54CU @ 900MHz
Memory:
Power is why I chose GDDR5X, 384-bit @ 3350MHz, 320GB/s

As far as cost, Phil already said console-level pricing, or a premium price.
Custom chips do not cost manufacturers hundreds of dollars; you must remember that these companies buy chips in bulk. Remember the leaked document from Microsoft that showed the Xbox One SoC would cost exactly $26, and that's for all technologies and licensing of core technologies. This SoC may cost MS only $30 - $40 max. Example: the "A9 chip found inside of the iPhone 6s and 6s Plus. The estimate worked out to something in the range of $22 to $24, in line with an estimate provided by IHS iSuppli."
Scorpio starts at $399 and up.

I would say with those specs it would be $499....but I'd still buy it.
 