Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

If you only have 100% perfect chips for Anaconda, you'll have a very small pool of chips to use. Let's say 10% would be perfect, all 64 CUs working, that means 90% of your consoles are going to be Lockhart and only 10% Anaconda. Great if that's what your market wants, but what if it wants Anaconda:Lockhart in a 60:40 ratio?

You have to design your chip to be able to produce the numbers needed. That means disabling some CUs to be tolerant of defects. Only the PC space with its rare super-high-end market can afford defect-free chips and products.
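To put rough numbers on the binning argument, here's a minimal sketch assuming each of the 64 CUs fails independently with the same defect probability. The 3% rate and the independence assumption are purely illustrative, not a real yield model:

```python
# Illustrative binning maths: assumes each CU fails independently with
# probability p_defect (the 3% value is made up, not a real defect rate).
from math import comb

def p_at_least_good(total_cus: int, needed_cus: int, p_defect: float) -> float:
    """Probability that at least `needed_cus` of `total_cus` CUs are defect-free."""
    p_good = 1.0 - p_defect
    return sum(
        comb(total_cus, k) * p_good**k * p_defect**(total_cus - k)
        for k in range(needed_cus, total_cus + 1)
    )

p = 0.03  # assumed per-CU defect probability
print(f"Perfect 64/64 dies:      {p_at_least_good(64, 64, p):.1%}")  # ~14%
print(f"Dies with >=60 good CUs: {p_at_least_good(64, 60, p):.1%}")  # ~96%
```

Even under this crude model, tolerating four dead CUs turns a ~14% bin into a ~96% bin, which is the whole point of shipping with CUs disabled.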
Yep, I don't disagree with this. That's why I said cost, and that they could potentially end up having to use a lot of good chips for Lockhart etc.
If they choose to use XSX in the cloud, that could also change the balance of the wafer usage.

But I disagree that you have to have disabled CUs, and that 64 CUs is impossible. (That's what I was replying to, not yourself.)

Usual small print: not saying I have a view on what they're doing, just discussing the pros and cons etc.
 
A 13.3 TFLOPs RDNA part is not 2080 Ti-level power, I think.

No, just 0.1TF off then :p
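For reference, that quip works out from the usual back-of-the-envelope FP32 formula (shader count × 2 ops per clock × clock). A minimal sketch, using the 2080 Ti's public reference specs and treating 13.3 TF purely as the rumored figure:

```python
# Peak FP32 throughput: shaders x 2 ops per clock (FMA) x clock in GHz.
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000.0

rtx_2080_ti = tflops(4352, 1.545)  # reference boost clock, ~13.45 TF
rumored_next_gen = 13.3            # the rumored figure above, not a spec
print(f"2080 Ti: {rtx_2080_ti:.2f} TF, gap: {rtx_2080_ti - rumored_next_gen:.2f} TF")
```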

Regardless, I find it frustrating that MS is not making things crystal clear to the tech community in their reveal, in terms of laying out all the relevant GPU specifics, thus causing some confusion.

That might be because the tech community isn't that big a share of the total market of people buying (Xbox) consoles.

If you follow Shifty's logic, you'll see why tunnel-visioning on just 'FLOPs' is the flaw in the argument.

So how is a 5700 XT twice as powerful as the One X/RX 580 then, with 9 to 10 TF?
 
So how is a 5700 XT twice as powerful as the One X/RX 580 then, with 9 to 10 TF?
Bottlenecks moved/relieved.
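To illustrate what "bottlenecks moved/relieved" means in numbers: same peak-FLOPs formula, but with an assumed delivered-performance-per-FLOP factor bolted on. The 1.25× RDNA-over-GCN figure echoes AMD's own performance-per-clock claim; the linear model and the reference boost clocks are simplifications, not measurements:

```python
# Toy model: effective performance = theoretical TF x assumed perf-per-FLOP.
# Real game performance also hinges on bandwidth, caches and drivers.
def effective_perf(cus: int, clock_ghz: float, perf_per_flop: float) -> float:
    theoretical_tf = cus * 64 * 2 * clock_ghz / 1000.0  # 64 shaders per CU
    return theoretical_tf * perf_per_flop

rx_580  = effective_perf(36, 1.340, 1.00)  # ~6.2 TF GCN, baseline
xt_5700 = effective_perf(40, 1.905, 1.25)  # ~9.75 TF RDNA, better per FLOP
print(f"5700 XT vs RX 580: {xt_5700 / rx_580:.2f}x")  # ~1.97x
```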
 
OK, 12 TF it is then for XSX (not used to saying XSX lol). Then I wonder why DF didn't just say ~12 TF, instead of being optimistic.
Because they have the integrity to report correctly. It was not reported as 12 TF, so they did not echo it. DF and others are doing the right thing by leaving the door open for a margin of error, in particular because Phil Spencer left that door open.
 
Because they have the integrity to report correctly. It was not reported as 12 TF, so they did not echo it. DF and others are doing the right thing by leaving the door open for a margin of error, in particular because Phil Spencer left that door open.

OK, then we will have to wait and see what kind of specs we get, and of course continue to guess and decrypt all the PR talk.
I see many leaks claiming 12 to 15 TF or even higher (ResetEra/GAF). Though the source is always 'a friend' of the leaker; I've not seen a leaker post specs himself yet.
 
Regardless, I find it frustrating that MS is not making things crystal clear to the tech community in their reveal, in terms of laying out all the relevant GPU specifics, thus causing some confusion.
Whereas for me it gives me a base/floor, i.e. I'm looking at a minimum of around 9 TF regardless.

Was I expecting twice the performance? Yeah, but it's still nice to hear. It'd be even nicer if/when they say 12.x TF.

From the general conversation:
They don't need to measure it in all workloads; they also said "by their maths", which could easily mean the benchmarks they chose to focus on.
Does GPU performance depend on other factors? Sure it does, but that doesn't mean you can't say GPU X is twice as fast (by my maths).
 
We can all stop fantasizing about the whole 64 CU thing being active on XSX; Klee just implied he only previously confirmed the TF number, not the rest (64 CUs at 1475 MHz).

https://www.resetera.com/threads/ne...8-the-dark-tower.159131/page-76#post-27484006
Not that we didn't already call it BS here yesterday, but it's good that Klee confirmed it.

Btw, he said he has a friend at one studio, but I seriously, seriously doubt he would get all the different info he is spreading on ResetEra.
 
I'm not even sure why the 64 CU figure was seen as such a positive.
Was it because a slight boost in frequency before release would have had a bigger effect?
I said maybe going wide may help with RT, but maybe going fast does.
Point is, I don't know why 64 CUs is some kind of holy grail here.
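To make the wide-vs-fast question concrete, two hypothetical configurations (neither is a claimed spec) land on essentially the same theoretical TF:

```python
# Hypothetical wide-vs-fast comparison at (almost) equal peak TF.
def tf(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

print(f"64 CU @ 1.475 GHz: {tf(64, 1.475):.2f} TF")  # ~12.08 TF, wide and slow
print(f"48 CU @ 1.965 GHz: {tf(48, 1.965):.2f} TF")  # ~12.07 TF, narrow and fast
```

Which of the two runs games or RT better depends on how the workload scales with occupancy versus clock, which is exactly why raw CU count alone isn't a holy grail.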
 
Btw, he said he has a friend at one studio, but I seriously, seriously doubt he would get all the different info he is spreading on ResetEra.

I was wondering that in another post, but isn't this the perfect way to attract massive attention: by referring to a friend as the source, so that when the real specs turn out to be different, he can say his source had it wrong? I mean, this guy is never lying then, but he gets away with up to six months or more of attention.
 
I think the 'problem' is that it seems unbelievable that AMD can actually compete with a 2080 Ti out of the blue, and that in a console APU/SoC. AMD must have some real monsters ready for the dGPU market, ready to destroy anything Nvidia has or is ever going to get.
 
I think the 'problem' is that it seems unbelievable that AMD can actually compete with a 2080 Ti out of the blue, and that in a console APU/SoC. AMD must have some real monsters ready for the dGPU market, ready to destroy anything Nvidia has or is ever going to get.

Like I said before, there is a rumor of an 80 CU AMD RDNA 2 dGPU for the PC, with a die size of 505 mm².
 
I think the 'problem' is that it seems unbelievable that AMD can actually compete with a 2080 Ti out of the blue, and that in a console APU/SoC. AMD must have some real monsters ready for the dGPU market, ready to destroy anything Nvidia has or is ever going to get.

If AMD can effectively integrate two or more GPU chiplets on a package around its Infinity Fabric design, where the GPU chiplets communicate as one (i.e. are viewed as a single GPU), then Nvidia may have a problem on its hands, because SLI/NVLink GPU scaling is virtually dead within the PC gaming space. And if AMD can scale performance (100%) across GPU chiplets similar to its Ryzen/Threadripper CPU chiplets, then PC gamers are going to be even further ahead when it comes to raw GPU floating-point performance.
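A toy model of that scaling question; the chiplet count, per-chiplet TF and efficiency values are all made up for illustration:

```python
# Assumed scaling: the first chiplet is free, each extra one contributes
# its TF scaled by an inter-chiplet efficiency factor (1.0 = the ideal case).
def multi_chiplet_tf(base_tf: float, n_chiplets: int, efficiency: float) -> float:
    return base_tf * (1 + (n_chiplets - 1) * efficiency)

for eff in (1.00, 0.85, 0.70):
    print(f"2 x 10 TF chiplets @ {eff:.0%} scaling: "
          f"{multi_chiplet_tf(10.0, 2, eff):.1f} TF")
```

The gap between the 100% row and the others is exactly what killed SLI-style scaling; the bet here is that Infinity Fabric keeps the efficiency close to 1.0.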
 