Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Globalisateur u ok bro?
It's late in France, like 2-3 am his time. Give him time. It's not easy following the discussion, much less participating in it; I more or less sit around waiting for a proper written summary.
 
What happens when you use 16 Gbps chips instead of the 14 Gbps ones that are used by default for the theoretical value?

I suppose AMD will tell Sony or MS, "Ammm... guys, you cannot do that. We put 448 GB/s and 560 GB/s as the theoretical maximums, so... please remove the 16 Gbps chips and use 14 Gbps ones, because we cannot have it higher than the theoretical."

Simple answer is that Sony is allowed to use faster GDDR6, and MS is not.

Which makes sense, because Xbox gets no funding; the developers are all volunteers.
 
Interesting tidbits from the HDT folder regarding Oberon...

They had the OPN number back in February already. It looks familiar as well!
Is the second ID number, ending in F10, supposed to be in the sequence of PCI IDs?
It seems like a lapse to stop counting in hex.

What happens when you use 16 Gbps chips instead of the 14 Gbps ones that are used by default for the theoretical value?

I suppose AMD will tell Sony or MS, "Ammm... guys, you cannot do that. We put 448 GB/s and 560 GB/s as the theoretical maximums, so... please remove the 16 Gbps chips and use 14 Gbps ones, because we cannot have it higher than the theoretical."

It can be limited by the specifications for the memory controller, or by what was negotiated for the design. We don't have an external reference for what AMD considers the upper limit before needing an upgrade to the involved units. There have been instances where targeting a lower range yielded area and power savings, or leaving headroom for higher clocks did the opposite (e.g. Tahiti's much larger GDDR5 PHY versus Pitcairn/Hawaii).
In other cases, there may be a lack of timing settings or multipliers for a given clock, or the settings exist but AMD did not really implement working silicon for certain combinations (e.g. test settings in the BIOS for Infinity Fabric 1:1 timings in Zen 1 could be selected and would lock up the system).

Depending on the maximum AMD has validated the controllers for, it may leave the chips the client buys without a guarantee that they will be considered viable if the parts are run past the specification agreed to. If AMD is responsible for validating good die, AMD may be able to refuse to check whether the chips will fail at 16 Gbps, and the purchase agreement for chips vetted for 14 Gbps wouldn't give the customer grounds to refuse buying them.
I suppose the buyer could then take on the redundant expense of re-validating chips on their own, though discards due to failures to meet the upclock would not be AMD's problem.
Taking the chips out of spec without agreement with AMD may also weaken any claims against AMD if something like premature failure or hardware flaws are encountered later, and AMD can point to the silicon being pushed out of spec.

If the customers do their own validation, perhaps it's a question of whether there's a larger percentage of lost chips, or whether their validation is thorough enough when the speeds are significantly beyond what they're set up to test.
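
For reference, a minimal sketch of where the quoted theoretical figures come from, assuming the 448 GB/s and 560 GB/s numbers correspond to 256-bit and 320-bit buses at 14 Gbps per pin (that pairing is an assumption for illustration, not something stated in the quote):

```python
# Theoretical GDDR6 bandwidth is just per-pin data rate times bus width.

def gddr6_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s: Gbps per pin * number of pins / 8 bits per byte."""
    return pin_speed_gbps * bus_width_bits / 8

for pin_speed in (14, 16, 18):
    for bus_width in (256, 320):
        print(f"{pin_speed} Gbps x {bus_width}-bit = "
              f"{gddr6_bandwidth_gbs(pin_speed, bus_width):.0f} GB/s")
# 14 x 256 -> 448 GB/s and 14 x 320 -> 560 GB/s match the quoted maximums;
# 16 x 256 -> 512 GB/s, 16 x 320 -> 640 GB/s, 18 x 320 -> 720 GB/s.
```

On that arithmetic, swapping a 256-bit bus from 14 Gbps to 16 Gbps chips is the jump from 448 GB/s to 512 GB/s being discussed; the question above is whether the controller and PHY are validated for it.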
 
It also depends on when the decision was made.
And why.
During testing did they realize that the bandwidth wasn't enough?
Because they wouldn't just up the bandwidth for no reason.
Either the GPU or CPU had an increase, or their simulations underestimated how much bandwidth was required.

Could definitely happen, but there would be a reason, and the impact of the change is different depending on when it happened.
 
MS would have built this overhead into the memory controller if they thought there was a chance they'd need to upgrade to faster chips. They're paying for die area at the fab, so I don't think there is money saved by limiting the speed of your memory controllers.
 
MS would have built this overhead into the memory controller if they thought there was a chance they'd need to upgrade to faster chips. They're paying for die area at the fab, so I don't think there is money saved by limiting the speed of your memory controllers.
The analog drivers are not designed by MS; they are at the limit of what the process can drive at these high currents (slew rate becomes a big deal). The vendors designing the PHY would have tested the yield on specific processes at specific frequencies. If AMD cannot reach 18 Gbps equalization with good signal integrity, it's game over already. Memory bins are rated under ideal conditions, and the client PCB might not meet the ideal specs either.
 
I think higher TF requires higher bandwidth, so if 9.2 TF was (is?) true, as it seems, and Sony afterwards wanted to push the silicon to 10.2 TF (my idea, and others'), they had to change the RAM chips accordingly, spending more on cooling and RAM... They could maybe reach this by just delaying the console a bit in terms of quantities. Maybe just a few months... Don't know... Baseless.
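
To put rough numbers on the "higher TF needs higher bandwidth" intuition: the 36 CU count is from the GitHub leak being discussed, while the clock values below are simply the ones that land near the quoted 9.2 and 10.2 TF figures and are assumptions, not confirmed specs.

```python
# Peak FP32 throughput and bandwidth-per-TF for an assumed 36 CU part.

def peak_tflops(cus: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS = CUs * 64 shader lanes * 2 ops per clock (FMA) * clock in GHz / 1000."""
    return cus * 64 * 2 * clock_ghz / 1000

for clock_ghz in (2.0, 2.23):        # assumed clocks giving roughly 9.2 and 10.3 TF
    tf = peak_tflops(36, clock_ghz)
    for bw_gbs in (448, 512):        # 14 Gbps vs 16 Gbps chips on a 256-bit bus
        print(f"36 CUs @ {clock_ghz} GHz = {tf:.2f} TF -> "
              f"{bw_gbs / tf:.1f} GB/s per TF at {bw_gbs} GB/s")
```

Raising the clock lifts the TF figure but, with the same 448 GB/s, shrinks bandwidth per TF from roughly 49 to 44 GB/s; moving to 16 Gbps chips (512 GB/s) restores about the original ratio, which is the logic behind swapping RAM speeds alongside a clock push.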
 
But that's what Sony said in 2016; pleasing me is not helping. Not that I'm a fan of Sony's marketing of the PS5 right now, though.

Sorry, I didn't follow this.

By that metric almost everything that costs more than $200 is niche.

It's not necessarily about cost. My post was in the context of grabbing maximum media attention for marketing purposes for a video games console. 15-20 years ago, video game consoles (and PCs) probably represented 95% of the ways the entire video game market consumed video games. Now it's probably more like 5% because of mobile and web-based video games, regardless of the insane profits of some companies. New video game consoles used to get significant attention from mainstream media; now they get far less, because who cares about a tiny entertainment ecosystem that sells 20-30m units a year.

I was frankly astonished when, watching the breakfast news on 23 February 2013, there was a short piece, maybe 30 seconds long, on Sony announcing the PS4. The same did not happen for the Xbox One in May, nor for the PS4 Pro in 2016. I guess 22/23 February 2013 was a slow news day.

As electronic devices go, it's definitely just a fraction of things such as smartphones, TVs, media players, etc.

Context! Marketing. Media interest in new video games consoles. That is what my post was in relation to.

Wow, I wouldn't have picked you for using nonsense stats. You've selected the world population of individuals, which clearly isn't the market for a great deal of things, including consoles and the media audience you're talking about, and then picked unit sales for a device that serves the whole household. Why not compare the 'Western' market data, as you're talking about what should interest the 'Western' media? What proportion of Brits care about video games such that the BBC will or won't care to cover them? E.g. 44% of UK households have a console.

And more than 90% have a fridge and a TV, more people have tablets than consoles, and a greater proportion have a car. A new video games console just isn't a product that is particularly newsworthy - even more so when the pitch is "here's a new one, it's got some features you won't understand and it's a bit faster than the old one" :runaway:

BORING. Hence my assertion that if Sony want the media to turn up to an event for the media, they need to hold that event where the media already are, or give them several weeks' notice. Then you may get this kind of coverage, which is what they want. For the PS4 Pro, Sony had to bankroll the costs of getting journalists to attend the reveal event in New York!
 
I hope 9.2 TF doesn't hold back their vision; if Sony have any Souls, they would grant them 12+ TF of power to reach that benchmark.

Then you're in luck, since 9.2 TF of RDNA will outperform 12 TF of GCN in everything but compute.

I'd prefer 12 RDNA TFs, personally. Heck, I'd prefer 20. But the GitHub leaks are the most credible we've seen so far, and they point to a console that can slap about a Vega 64.
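
As a back-of-envelope check on the RDNA-versus-GCN comparison, here is a sketch that converts RDNA TF into a rough "GCN-equivalent" figure using AMD's publicly claimed ~1.25x performance per clock for RDNA over GCN. That factor is a marketing-derived assumption for illustration, not a measured game-performance ratio.

```python
# Rough "GCN-equivalent" TF, for comparison only.

RDNA_PERF_PER_FLOP_VS_GCN = 1.25  # assumed scaling, per AMD's public ~1.25x perf-per-clock RDNA claim

def gcn_equivalent_tf(rdna_tf: float, scale: float = RDNA_PERF_PER_FLOP_VS_GCN) -> float:
    """Scale an RDNA TF figure by the assumed perf-per-FLOP advantage over GCN."""
    return rdna_tf * scale

for rdna_tf in (9.2, 10.2, 12.0):
    print(f"{rdna_tf} RDNA TF ~ {gcn_equivalent_tf(rdna_tf):.1f} GCN-equivalent TF")
```

On that crude scaling, 9.2 RDNA TF lands around 11.5 GCN-equivalent TF, i.e. in the neighbourhood of Vega 64's ~12.7 TF peak, before accounting for clocks, bandwidth, or driver differences.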
 