Are the GDDR6X mentions coming from anyone other than Igor's Lab?
I think kopite7kimi on Twitter was the first to mention GDDR6X.
Samsung announced 18 Gbps GDDR6 two years ago already.
But 18 Gbps GDDR6 isn't GDDR6X; it's still just GDDR6.
GDDR6X is just someone's imagination, IMHO. If GDDR6 can run at 18 Gbps, why would there be a need for GDDR6X (at 18 Gbps)?
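For scale, here's a quick back-of-the-envelope on what those per-pin rates mean in aggregate bandwidth. The 384-bit bus width is an assumption for illustration, not a confirmed spec for any unannounced card:

```cpp
#include <cstdio>

int main() {
    // Aggregate bandwidth (GB/s) = per-pin rate (Gbps) * bus width (bits) / 8.
    const int bus_bits = 384;                        // assumed high-end bus width
    const double rates_gbps[] = {14.0, 16.0, 18.0};  // shipping and announced GDDR6 speeds
    for (double r : rates_gbps)
        std::printf("%2.0f Gbps x %d-bit = %3.0f GB/s\n", r, bus_bits, r * bus_bits / 8.0);
    return 0;
}
```

At 18 Gbps, plain GDDR6 on a 384-bit bus already reaches 864 GB/s, which is the point: the headroom is there without a new memory standard.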
The chances that there's GDDR6X are really slim. None of the companies possibly involved (JEDEC, memory manufacturers) is known for keeping things like these under wraps so close to a supposed launch, and there have been literally no reports of anyone developing GDDR6X - hell, we haven't even reached the top speeds GDDR6 will go to yet.
Which makes it even stranger that this Igor's Lab guy seems willing to die on that hill. I haven't heard of the website before, but I'm seeing people claiming he's a trusted source.

He made his name at Tom's Hardware, and more specifically Tom's Hardware Germany (IIRC he bought Tom's HW Germany at some point and then switched it to Igor's Lab).
For that reason, I could imagine that Gaming-Ampere retains the large(r) SM/L1 pool that GA100 has, maybe even hardcoded (BIOS-locked) in a triple split to reserve the additional memory over Turing for raytracing.

Plausible. Not the triple-split part - there isn't much difference between TMU and RT core usage of the L1 (and you don't see a distinction between cache used by the TMU and direct access either). But the increased SM/L1 size is quite likely, as long as it fits within the power budget.
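For background on the split that already exists in software: since Volta, L1 and shared memory come out of one physical pool per SM, and a kernel can hint how to divide it through the CUDA runtime. A minimal sketch of that existing two-way knob (the kernel and sizes are placeholders, and the speculated third RT partition has no public API):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Placeholder kernel standing in for any shared-memory-heavy workload.
__global__ void tileKernel(float *out) {
    __shared__ float tile[1024];
    tile[threadIdx.x] = static_cast<float>(threadIdx.x);
    __syncthreads();
    out[threadIdx.x] = tile[threadIdx.x];
}

int main() {
    // Bias the unified L1/shared pool toward shared memory for this kernel.
    // It's only a preference; the hardware rounds to a supported split.
    cudaFuncSetAttribute(tileKernel,
                         cudaFuncAttributePreferredSharedMemoryCarveout,
                         cudaSharedmemCarveoutMaxShared);

    float *out;
    cudaMalloc(&out, 1024 * sizeof(float));
    tileKernel<<<1, 1024>>>(out);
    cudaDeviceSynchronize();
    cudaFree(out);
    std::printf("done\n");
    return 0;
}
```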
That clock is about 400 MHz higher than a 2080 Ti's. The 3080 Ti should have more SMs, so at that clock it would be more than 30% faster. This would have to be a 3080, I think.
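Rough math behind that claim, under the naive assumption that performance scales linearly with SM count times clock. Only the ~1.55 GHz listed boost and the ~400 MHz delta come from this thread; the SM counts are guesses for illustration:

```cpp
#include <cstdio>

int main() {
    const double base_sms = 68.0, base_clk = 1.545; // 2080 Ti: 68 SMs, ~1545 MHz listed boost
    const double sample_clk = base_clk + 0.4;       // "about 400 MHz higher"
    const double sm_guesses[] = {68.0, 80.0, 96.0}; // hypothetical Ampere SM counts
    for (double sms : sm_guesses) {
        double speedup = (sms * sample_clk) / (base_sms * base_clk);
        std::printf("%3.0f SMs @ %.2f GHz -> %+.0f%% vs 2080 Ti (naive linear model)\n",
                    sms, sample_clk, (speedup - 1.0) * 100.0);
    }
    return 0;
}
```

The clock alone is about +26%; any increase in SM count pushes the naive estimate well past +30%.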
I'm confused by what you mean here: the 2080 Ti boosts to ~1.8 GHz, and somebody ran one at around the same clock as the 3xxx sample and got a 14k graphics score.
https://www.3dmark.com/spy/9688137
Not sure why the leaker didn't find the memory size - or didn't leak it this time.
The 2080 Ti has a listed boost clock of around 1550 MHz, so could that be an overclocked 2080 Ti?
NVIDIA's listed boost clocks have been surpassed by a wide margin ever since Pascal introduced GPU Boost 3.0. It's the opposite of AMD, whose boost clocks really do carry an 'up to' qualifier.

My custom 1080 Ti does 2 GHz by default, while its listed boost is only 1.7 GHz.
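If you want to check that on your own card, here's a minimal NVML sketch (error handling omitted, link against nvidia-ml) that compares the live graphics clock against the maximum the driver reports:

```cpp
#include <nvml.h>
#include <cstdio>

int main() {
    nvmlInit();
    nvmlDevice_t dev;
    nvmlDeviceGetHandleByIndex(0, &dev);
    unsigned int cur = 0, max = 0;
    nvmlDeviceGetClockInfo(dev, NVML_CLOCK_GRAPHICS, &cur);     // clock right now
    nvmlDeviceGetMaxClockInfo(dev, NVML_CLOCK_GRAPHICS, &max);  // max supported clock
    std::printf("current graphics clock: %u MHz, max: %u MHz\n", cur, max);
    nvmlShutdown();
    return 0;
}
```

Run it under load and compare the current clock against the boost clock on the spec sheet rather than the reported max.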