Baseless Next Generation Rumors with no Technical Merits [pre E3 2019] *spawn*

Well, I've thumb-sucked a new theory based on absolute hearsay :LOL:

With the possibility of a 1.8GHz GPU part, it might mean they crunched the numbers and concluded that a higher clock on a smaller GPU chip worked out better.

That would leave them more silicon for other special sauce, such as possible ray tracing tech.
Yeah, I was going to make some commentary here that it's going to be better if they go fast rather than wide. Once they go wide, everything has to increase in size to accommodate it, or bottlenecks will erase any advantage you gain by going wide, and you lose real estate for other silicon requirements.

Increasing the clock speed drives improvements across the board; this seems a better bet to me.
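
To put rough numbers on fast-vs-wide (a back-of-envelope sketch; the CU counts and clocks here are made up for illustration, not leaked specs):

Code:
# GCN-style paper throughput: each CU has 64 lanes doing one FMA (2 FLOPs)
# per cycle, so FP32 TFLOPS = CUs * 64 * 2 * clock_GHz / 1000.
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

print(f"40 CU @ 1.8 GHz: {tflops(40, 1.8):.2f} TFLOPS")  # ~9.2, smaller die
print(f"56 CU @ 1.3 GHz: {tflops(56, 1.3):.2f} TFLOPS")  # ~9.3, bigger die

Similar paper FLOPS from a noticeably smaller die, which is the whole appeal of going fast.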
 
Vega 64 and Radeon 7 are both at 300W, so there is that. Radeon 7 at 330mm² is not a big chip. It's a new process, but it's still a medium size for a PC chip.
If you look at the history of AMD die sizes, there are only a handful of larger chips. If the scale includes 'gargantuan' at the top end, 350mm² is fairly typical of the 'large' dies used in flagship GPUs. Looking at unit sales of bigger, more expensive GPUs, it should be fairly straightforward for AMD to extrapolate "it'll cost this much to make an 80 CU GPU, which'll be that big, which'll cost that much and sell for so-and-so, which'll sell perhaps this many, which'll mean that much profit/loss", and come to an informed decision. Which is to make such a thing. ;)

nVidia can push boundaries more because they are the stronger brand by far and can expect 4-5x as many unit sales for whatever they create, giving them the opportunity to create something at 4x the extravagance AMD can go to.

Also, we're talking consoles here rather than PC GPUs, where ~350 mm² is the entire silicon budget. Hoping for 80 CUs in a console part makes zero sense, save maybe if MS follow through with a monster creation commanding a monster price-tag.
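
That extrapolation can be roughed out in a few lines (a sketch only; the wafer price and defect density below are placeholder guesses, not real figures):

Code:
import math

def dies_per_wafer(die_mm2, wafer_mm=300):
    # Common approximation: gross dies on a round wafer minus edge loss.
    return (math.pi * (wafer_mm / 2) ** 2 / die_mm2
            - math.pi * wafer_mm / math.sqrt(2 * die_mm2))

def yield_rate(die_mm2, defects_per_mm2=0.001):
    return math.exp(-die_mm2 * defects_per_mm2)  # simple Poisson yield model

for area in (350, 550):  # typical 'large' flagship vs hypothetical 80 CU monster
    good = dies_per_wafer(area) * yield_rate(area)
    print(f"{area} mm2: ~{good:.0f} good dies/wafer, "
          f"~${8000 / good:.0f}/die at a guessed $8000/wafer")

Bigger dies get hit twice: fewer candidates per wafer, and a lower yield on each, so per-die cost climbs faster than area does.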
 
2012 was the last time AMD's flagship was around 350mm²; since then the die sizes have grown. Right now AMD is hitting a power wall with their chips. HBM memory was probably chosen at least partly to combat that; if they didn't have the power issue, GDDR5X/6 would have been the cheaper option for them. nVidia's designs are more power efficient, and thus they can scale their dies wider and bigger. AMD needs an advancement in power efficiency to push wider and to higher performance.

I agree that 80 CUs does not currently look realistic for a console :)
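
The power wall drops straight out of the usual scaling rule: dynamic power goes roughly as C·V²·f, and chasing higher clocks usually demands more voltage too (illustrative numbers only, not measurements):

Code:
# Dynamic power ~ C * V^2 * f: a 50% clock bump that also needs a voltage
# bump costs far more than 50% extra power.
base_f, base_v, base_p = 1.2, 1.00, 150.0  # GHz, volts, watts (made up)

for f, v in [(1.2, 1.00), (1.5, 1.10), (1.8, 1.25)]:
    p = base_p * (f / base_f) * (v / base_v) ** 2
    print(f"{f:.1f} GHz @ {v:.2f} V -> ~{p:.0f} W "
          f"({f / base_f - 1:+.0%} perf, {p / base_p - 1:+.0%} power)")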
 
Well, I've thumb-sucked a new theory based on absolute hearsay :LOL:

With the possibility of a 1.8GHz GPU part, it might mean they crunched the numbers and concluded that a higher clock on a smaller GPU chip worked out better.

That would leave them more silicon for other special sauce, such as possible ray tracing tech.

I mean, this is sort of what was done for Xbox One X. It only had four more CUs than Pro, but +28% in clock speed.

Wonder if AMD learned some things with the development of the Scorpio Engine that are being applied to the next-gen consoles.
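
The arithmetic backs that up, using the public specs of both machines:

Code:
# PS4 Pro: 36 CUs @ 911 MHz; Xbox One X: 40 CUs @ 1172 MHz.
# FP32 GFLOPS = CUs * 64 lanes * 2 (FMA) * clock_GHz.
pro  = 36 * 64 * 2 * 0.911
onex = 40 * 64 * 2 * 1.172
print(f"Pro: {pro / 1000:.1f} TF, 1X: {onex / 1000:.1f} TF")  # 4.2 vs 6.0
print(f"CUs +{40 / 36 - 1:.0%}, clock +{1.172 / 0.911 - 1:.0%}, "
      f"FLOPS +{onex / pro - 1:.0%}")                         # +11/+29/+43

Most of the 1X's uplift came from the clock, not the width.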
 
If they go the chiplet/Infinity Fabric route, would that help cooling if they went fast rather than wide?
I guess it would also help going wide, but I'm just thinking about fast at the moment.
They wouldn't be trying to cool one monolithic die then; look at the package size of the Epyc CPUs.
I'm also wondering about whether it had custom accelerators, etc.
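
For what it's worth, the cooling intuition can be put in crude numbers: what a cooler ultimately fights is heat flux (W/mm²) at the die, and splitting one hot die into spaced chiplets lowers the peak (all figures below are made up):

Code:
# Crude flux comparison: same total watts, one die vs. two spaced chiplets.
total_w = 180.0
monolithic_mm2 = 300.0          # hypothetical single die
chiplets_mm2 = [160.0, 160.0]   # hypothetical pair, plus physical spacing

print(f"monolithic: {total_w / monolithic_mm2:.2f} W/mm2 in one hotspot")
print(f"chiplets:   {total_w / sum(chiplets_mm2):.2f} W/mm2, "
      f"split across separated heat sources")

The raw flux difference is modest; the bigger win is that the hotspots are physically apart, so the heatsink isn't fighting one concentrated spot.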
 
I mean, this is sort of what was done for Xbox One X. It only had four more CUs than Pro, but +28% in clock speed.

Wonder if AMD learned some things with the development of the Scorpio Engine that are being applied to the next-gen consoles.

I think the higher clocks on X are more of a result of Microsoft's vapor chamber design.
 
The Hovis method was a big part. I'm still of the belief it was a way for them to improve effective yields by tweaking the electrical delivery for chips with poorer characteristics, rather than a one-size-fits-all approach (leading to drastically lower/more conservative clocks than typically seen in desktop counterparts, for example).
 
Or it's possibly about DR FP16 having power and clock issues. We see this in AMD GPUs having dramatically worse efficiency once they added this feature.

We have no idea what the power delivery tweak is. All we have is a useless car reference. And there must be a patent somewhere giving the real info about it. It's not easy to find; maybe it's filed but not published yet.
 
Seems a flimsy correlation, since that's not the only thing that changed between generations, and I'm not sure what the effect is when it's not being utilized, never mind the difference in the desktop branch.

DRFP16 didn't limit the base SKU's clock rates, for example. o_O edit: desktop 28nm parts went up to around 1050MHz, with console clocks ~25% slower (800/1050, ballpark example).

Polaris 10 (RX 480) was boosting to 1266MHz, where a similar ~25% reduction lands in the 950MHz range. If you're going by base clocks, P10 was at 1120MHz.

By the RX 580, the process was mature enough to eke out a couple hundred MHz more. If anything, I'd say the clocks circa 2016 were simply a fab-maturity issue, but boosting beyond that clearly takes some effort, judging by how hot the Polaris series can get.
 
I think the higher clocks on X are more of a result of Microsoft's vapor chamber design.

Probably helped, but if you look at the power consumption of the 1X and the Pro (about 175W vs 155W under load) and consider the additional memory (maybe 10~15W), it's pretty clear the 1X is doing something new to reach its high clocks.

The 1X is already a significantly bigger chip, so it's unlikely they get to be far more picky with yields, given that the memory, 4K UHD BR drive, and vapour chamber cooler will already be eating deep into that extra $100.
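
Roughing out that inference (the wall-power figures are the approximate measurements above; the memory number is just a guess):

Code:
onex_w, pro_w = 175.0, 155.0  # approx. load power at the wall
extra_mem_w = 12.5            # middle of the 10~15 W guess for the extra GDDR5

soc_headroom = (onex_w - pro_w) - extra_mem_w
print(f"~{soc_headroom:.0f} W left to cover +11% CUs and +29% clock")

A bigger die clocked 29% higher should naively cost a lot more than ~8 W, hence the 'doing something new' conclusion.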
 
Perhaps, as touted, their custom x86 CPU is actually that, and further from Jaguar than they get credit for. It may well be fully based on Jaguar, but perhaps it actually took a lot of Ryzen's efficiency improvements and baked them in there to offset the GPU. All that tech seemed mighty close to Hovis in explanation to me as well, and perhaps it makes a package of changes that allowed them to push the GPU clocks.

:?:
 
Seems a flimsy correlation, since that's not the only thing that changed between generations, and I'm not sure what the effect is when it's not being utilized, never mind the difference in the desktop branch.

DRFP16 didn't limit the base SKU's clock rates, for example. o_O
Right, well, it's the only post-Polaris feature they added, so I thought there was a (flimsy) correlation to be made for post-Polaris having some efficiency issues at higher clocks.

Anyway, it would be fun to compare the VRMs on a few boards to see what was done on the X, if someone gets lucky enough to get different ones. I have a guess based on something that was done by Xilinx on their FPGAs, talking about process strength variability being a nice zero sum (in power) under most circumstances; it was simply about changing the rail voltage to compensate, which ends up with the same power and slew rate for weak/strong batches.

It can't be what the X is doing, though, because it's just a VRM software tweak, and it seems a widespread practice. It also drifts with aging, so choosing motherboard parts precisely wouldn't work unless it's broad values. AMD have also documented something similar. In order to involve external components, the only thing left is some impedance/filtering-related stuff.
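
A toy version of that zero-sum idea (entirely made-up coefficients; the point is just that slow silicon gets extra voltage, leaky silicon gets less, and total power lands in roughly the same place):

Code:
# Toy model: fast-but-leaky parts hold the clock at lower Vdd, slow-but-tight
# parts need more Vdd; per-batch rail tuning evens the power out.
def total_power(v, leak_coeff, f_ghz=1.17, c=100.0):
    dynamic = c * v ** 2 * f_ghz     # ~ C * V^2 * f
    leakage = leak_coeff * v * 30.0  # crude leakage term, grows with V
    return dynamic + leakage

weak   = total_power(v=1.10, leak_coeff=0.7)  # slow batch, extra V to hit clocks
strong = total_power(v=1.00, leak_coeff=1.4)  # leaky batch, undervolted
print(f"weak batch: ~{weak:.0f} W, strong batch: ~{strong:.0f} W")  # ~165 vs ~159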
 
Current state-of-the-art 7nm, obviously.
Puh-lease, 7nm is soooo yesterday. It's been in huge wide-scale production for almost a full year.
Next you'll be telling me checks and stripes are out. :nope: Us cool kids are romping the 5nm rainbow. :yes:
 
The Hovis method was a big part. I'm still of the belief it was a way for them to improve effective yields by tweaking the electrical delivery for chips with poorer characteristics, rather than a one-size-fits-all approach (leading to drastically lower/more conservative clocks than typically seen in desktop counterparts, for example).
I don't quite get this, though. Surely the end result is going to mean some consoles that are notably louder and more power hungry, because they had poor components and need more juice? If the electrical difference is minimal, then it can't be having a significant impact.

Or is it a case that the thresholds between success and failure are minuscule, and you just need to find the sweet spot for 80% of failed parts to get them working? But then, why not just up the base electrics for all models to the level at which that 80% works?

:???:
 
X Hovis methodology tested: https://www.tested.com/tech/825498-xbox-one-x-and-hovis-method/

Rather than have a single power profile for all consoles manufactured, which would result in some generating excess heat by taking more power than components require, each Scorpio Engine processor has a custom power profile programmed onto the motherboard it's paired with in the factory. This process is referred to as the Hovis Method, named after Xbox engineer Bill Hovis. This means that Microsoft is able to net better yields of chips as opposed to a standard building process. Every system is highly power efficient, and the processors that require just a little more extra juice are now usable. For the consumer, this means that any two Xbox One X consoles quite literally aren't the same. Yes, they will all of course hit the same clock frequencies, data speeds, and everything else needed to run games identically. Some consoles however will draw more power than others, including my own.
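
As described, that boils down to a per-chip calibration step at the factory; something like this hypothetical sketch (every name and number here is invented for illustration):

Code:
# Hypothetical per-unit profiling as the article describes: find the lowest
# voltage at which this particular chip holds the target clock, and store
# that profile on its motherboard instead of one worst-case setting.
TARGET_CLOCK_MHZ = 1172
VOLTAGE_STEPS = [0.90, 0.95, 1.00, 1.05, 1.10, 1.15]  # illustrative

def calibrate(passes_at):                 # passes_at(voltage) -> bool
    for v in VOLTAGE_STEPS:               # lowest stable voltage wins
        if passes_at(v):
            return {"clock_mhz": TARGET_CLOCK_MHZ, "vdd": v}
    return None                           # genuinely bad die: rejected

# A marginal die that only holds 1172 MHz at 1.10 V still ships instead of
# being binned out by a single one-size-fits-all voltage.
print(calibrate(lambda v: v >= 1.10))     # {'clock_mhz': 1172, 'vdd': 1.1}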
 
So the net benefit is that some consoles can run cooler and quieter at the target clocks. Which means that the base volume level can on average be lower than worst-case, but worst-case (and thus noisier) consoles are still usable. I don't think that really helps yields.

The most obvious technical improvement that could be made to improve cooling, improve quietness, and enable hotter chips to be used is to stop frickin' cheaping out on the thermal paste! Put something of quality between the die and heat spreader.
 
It's from Tested, so it could be complete gibberish and a made-up explanation.

Norm from Tested was literally trying to explain to the engineer who designed PSVR that the PS4's HDMI was unable to output 120Hz and that the external box was doing frame interpolation. He continued to say it despite the engineer trying to stay calm and explain that the PS4 does output 120Hz for real and the game engine does temporal reprojection. His article contains only his delusions parading as an interview with Sony.

EDIT: ish... The power measurements sure look like a really nasty noise roulette. Maybe there's something to it.
 
Put something of quality between the die and heat spreader.

Shameless plug... :mrgreen:
 
With a CU limit on Navi, doesn't that also firm up Anaconda having a secondary graphics chip, either an RT assist or a dedicated GPU alongside an SoC? How can they achieve 'most powerful' otherwise?

Curious you mention that. The 4chan leak about the PS5 dev kit had a note stating that the Sony docs say only the GPU on the APU is working at the moment.

So... there might be a second GPU.
 
Are you talking about the actual console having a second GPU, or only about the dev box?

By the sounds of it, it just means the CPU on the APU is disabled in the current iteration of the dev box.
 