Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Call it all off guys, this is from Jensen Huang himself at an Nvidia conference in China, Dec 19

[attached slide from the presentation]


timestamp ~21:15
 
What is that supposed to mean? PS5 is going to beat 2080/ti.
The graphic is saying that the RTX 2080 is faster than next gen consoles.
Looks like he was talking about RT at the time; I haven't watched it all, so I don't know if he only meant RT or performance generally. He was on gaming laptops at that point, hence the weight comparison.
 
What is that supposed to mean? PS5 is going to beat 2080/ti.


Something around 9-10 TFLOPs of RDNA/2 might be competitive with a 2080, no? If you buy the 10 TF PS5 rumors, or think that when MS said Series X = 2x X1X they were scaling RDNA flops (e.g. 9 TF RDNA = 12 TF GCN), then it's not completely unreasonable. Shrug.

I mean, on Era half the laypeople met this not with disbelief but with "no duh, an RTX 2080 = $600" etc. etc.

5700XT = 9.7 TF according to Google.

According to TechPowerUp's graphs (admittedly a somewhat Nvidia-biased benchmark selection vs other sites, but it ought to be close enough), the 2080 is ~20% faster than the 5700XT.

The next consoles ought to be RDNA2, but are we expecting large efficiency gains? I doubt it. Mostly it will be the introduction of RT hardware, I assume.

And thinking it through further, IIRC the 2080 is a huge chip (the one they busted the reticle limit for), granted they spent large amounts of it on RT cores, and next gen might not be any different in that respect. The next consoles also have to cram in 8 Zen 2 cores. Yeah, a 2080-level GPU probably isn't that unrealistic at all; not saying it's true.
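Rough back-of-the-envelope numbers for the above, using spec shader counts and boost clocks; the RDNA-vs-GCN flops factor is just the rumoured one, not anything confirmed:

Code:
# Spec TFLOPS = 2 ops/clock * shader count * boost clock (GHz)
tf_5700xt = 2 * 2560 * 1.905 / 1000         # ~9.75 TF (RX 5700 XT)
tf_2080   = 2 * 2944 * 1.710 / 1000         # ~10.07 TF (desktop RTX 2080)

# TechPowerUp-style gap: 2080 ~20% ahead of the 5700XT, so at RDNA1
# perf-per-flop you'd need roughly this much to match it:
rdna1_tf_for_2080_parity = tf_5700xt * 1.2  # ~11.7 TF

# If MS's "2x X1X" were counted in GCN flops, and an RDNA flop were worth
# ~1.3x a GCN flop (assumed figure, not confirmed):
rdna_equiv_of_12tf_gcn = 12 / 1.3           # ~9.2 TF

print(tf_5700xt, tf_2080, rdna1_tf_for_2080_parity, rdna_equiv_of_12tf_gcn)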
 
The rumour that AMD's next high-end would end up around 2.7x GTX 1080 performance has been around for some time. I do believe it's going to be something special, given they impressed Samsung enough that they licensed it.
 
The RTX 2080 Max-Q in the Lenovo laptop is 6.5 TF. The console mentioned in the picture is Lockhart.


Can you read Chinese and does it mention Lockhart on the slides, or is this just your speculation?

Good point on the Max though.

Left all my analysis vs a desktop 2080 though, because I think it's still legit. We should be pretty happy with 8 Zen cores and a 2080-class GPU for next gen.
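For reference, the Max-Q vs desktop gap is mostly clocks: same TU104 shader count, very different rated boost. A sketch using Nvidia's listed specs; actual laptop clocks vary with the chassis power limit:

Code:
# Spec TFLOPS = 2 * shaders * boost clock (GHz)
shaders = 2944
tf_2080_max_q   = 2 * shaders * 1.095 / 1000   # ~6.4 TF at the Max-Q rated boost
tf_2080_desktop = 2 * shaders * 1.710 / 1000   # ~10.1 TF for the desktop card
print(tf_2080_max_q, tf_2080_desktop)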
 
The rumour that AMD's next high-end would end up around 2.7x GTX 1080 performance has been around for some time. I do believe it's going to be something special, given they impressed Samsung enough that they licensed it.
The 5700XT is about 20% faster than a GTX 1080; that would mean big Navi is 250% faster than the 5700XT. When was the last time we ever saw such huge sudden gains in any tech sector? Talk about logic!

LOL, these aren't rumors, they are pure fantasies.
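Quick ratio check on the numbers being argued over, taking both the rumoured 2.7x-GTX-1080 figure and the ~20% 5700XT-over-1080 gap at face value:

Code:
big_navi_vs_gtx1080 = 2.7     # the rumoured figure
r5700xt_vs_gtx1080  = 1.2     # 5700 XT roughly 20% ahead of a GTX 1080
big_navi_vs_5700xt  = big_navi_vs_gtx1080 / r5700xt_vs_gtx1080
print(big_navi_vs_5700xt)     # ~2.25x a 5700 XT -- the jump the thread shorthands as "~250%"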

This guy is gold.
Talk about pulling stuff straight out of his ass; it's now better than two Titans!

 
Considering how many deals NV is making with the major players in China, it's more likely to be a Chinese console of some sort.
I don't believe that for a second; NVIDIA has never cared about any console comparison except Xbox and PlayStation.
The RTX 2080 Max-Q in the Lenovo laptop is 6.5 TF. The console mentioned in the picture is Lockhart.
Could be; he could also be talking about RT, considering his whole speech was about RTX and RT games.
 
He thinks the 5700 is powered by only one 8-pin?
"... what, 2x 8-pins can't beat a 2080 Ti? Cause two 5700 XTs certainly can."

He also thinks AMD's patent leaves CUs more available to other tasks than NV's solution?
"and the shaders can do other shit in the scene alongside."

He definitely is an 'aspiring' author, that's for sure :)
 
Speculation: the XSX's SSD is only 2 GB/s (as per the rumor) because the XSX's hardware ASIC decompressor can only decompress 2 gigabytes of data per second in real time. What's the point of going higher!?
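Illustrative only (the compression ratio below is a made-up number, not part of the rumour), but it shows why you'd match the raw SSD speed to the decompressor's intake rather than go higher:

Code:
raw_read_gbps      = 2.0   # rumoured raw SSD bandwidth
decomp_intake_gbps = 2.0   # speculated ASIC limit on compressed input per second
compression_ratio  = 2.0   # assumed average compression ratio (hypothetical)

effective_gbps = min(raw_read_gbps, decomp_intake_gbps) * compression_ratio
print(effective_gbps)      # a faster drive behind a 2 GB/s decompressor delivers no more than this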
 
that would mean big Navi is 250% faster than the 5700XT. When was the last time we ever saw such huge sudden gains in any tech sector? Talk about logic!
When you come from a position of a weak architecture with obvious shortcomings (tiling and compression, for example) and a history of horrible physical implementations, I don't see it as unjustifiable. The 5700XT is running at very inefficient frequencies on the power curve; running 80 CUs at lower clocks, along with µarch and physical improvements, doesn't sound outlandish to get you near 250%.
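Illustrative scaling math for that argument; every factor below is an assumption, not a leak, and whether it stretches to the ~2.5x being claimed depends entirely on the numbers you plug in:

Code:
cu_scaling    = 80 / 40   # 2.0x the CUs of a 5700 XT
clock_scaling = 0.95      # assume slightly lower clocks at a saner point on the power curve
scaling_eff   = 0.90      # assume imperfect scaling with CU count (bandwidth, front end, etc.)
uarch_gain    = 1.15      # assumed RDNA2 per-CU/per-clock improvement

estimate_vs_5700xt = cu_scaling * clock_scaling * scaling_eff * uarch_gain
print(estimate_vs_5700xt)  # ~1.97x with these conservative assumptions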
 
When you come from a position of a weak architecture with obvious shortcomings (tiling and compression, for example) and a history of horrible physical implementations, I don't see it as unjustifiable. The 5700XT is running at very inefficient frequencies on the power curve; running 80 CUs at lower clocks, along with µarch and physical improvements, doesn't sound outlandish to get you near 250%.
No, that sounds exactly outlandish. We NEVER had this sort of scaling in the entire history of the dGPU market; even the mighty 8800 GTX barely exceeded the 150% mark (vs older generations) in select situations. What you are talking about here is a monumental leap in efficiency, transistor manufacturing and design, one which should've at least trickled down to small Navi and its brothers; instead we have 7nm Navi barely competing with 12nm Turing.

Again, this sort of scaling never happened across different arch generations on different nodes, let alone within the same gen and the same node.
 
You're being shortsighted. The dGPU market isn't where the design innovation is coming from; the mobile vendors are the leading-edge silicon designers because of competitive necessity. Zen most famously somewhat caught up to design features that had been ubiquitous in mobile for 7-8 years, and AMD has very much said that they want to apply the same design methodologies they used on Zen to their GPUs, essentially starting with RDNA and continuing to do more of it. Look at the absurdity of what Apple is doing right now in mobile: they're outright destroying everybody else through phenomenal architecture and physical implementation, to the point it's not even a competition anymore. Now imagine that actually applied to the stagnancy we've seen in the dGPU market.
 
We should be pretty happy with 8 Zen cores and a 2080-class GPU for next gen.

Indeed we should be; a 2080 will be matched by a 3070 early next year, and perhaps even a 3060 will come close, considering 7nm and an almost two-year timeframe. A console matching a 2018 GPU is the least we should get. Also, we can't go down from the current gen's 8 cores either, and they will be lower clocked than the Zen 3 PC variants.
 
The dGPU market isn't where the design innovation is coming from; the mobile vendors are the leading-edge silicon designers because of competitive necessity
Mobile is not dGPU; it's constrained by power, memory bandwidth, and years of suboptimal old archs. It's easy to achieve great scaling numbers on new nodes because of that.
Zen most famously somewhat caught up to design features that had been ubiquitous in mobile for 7-8 years,
Zen never had a 250% moment vs its previous generation.
Look at the absurdity of what Apple is doing right now in mobile
Apple barely had a 200% lead over its previous gen on a new "node".

You are saying that within the same node ("7nm"), a high-end GPU will be 250% faster than its mid-range brother, and both use the same overall general arch. Excuse me if I don't believe this one iota; it's simply absurd.
 