Nvidia Ampere Discussion [2020-05-14]

Are the GDDR6X mentions coming from anyone other than Igor's Lab?
 
But 18 Gbps GDDR6 isn't GDDR6X; it's still just GDDR6.
The chances that there's GDDR6X are really slim. None of the companies possibly involved (JEDEC, the memory manufacturers) is known for keeping things like this under wraps so close to a supposed launch, and there have been literally no reports of anyone developing GDDR6X. Hell, we haven't even reached the top speeds GDDR6 will go to yet.
 

Which makes it even stranger that this Igor's Lab guy seems willing to die on that hill. I hadn't heard of the website before, but I'm seeing people claiming he's a trusted source.
 
He made his name at Tom's Hardware, and more specifically Tom's Hardware Germany (IIRC he bought Tom's HW Germany at some point and then rebranded it as Igor's Lab).
 
For that reason, I could imagine that Gaming-Ampere retains the large(r) SM/L1 pool that GA100 has, maybe even hardcoded (BIOS-locked) into a triple split that reserves the additional capacity over Turing for raytracing.
Plausible. Not the triple-split part, though: there isn't much difference between how the TMUs and the RT cores use L1 (and you don't see a distinction between cache used by the TMUs and direct access either). But the increased SM L1 size is quite likely, as long as it fits within the power budget.
 
Supposed Ampere card in 3DMark Time Spy
https://hardwareleaks.com/2020/06/21/exclusive-first-look-at-nvidias-ampere-gaming-performance/

3DMark System Info says:
[screenshot: Ampere-TS-leak.png]

  • GPU vendor : NVidia Corporation
  • GPU core clock : 1935 MHz (Time Spy only reports boost clocks)
  • Memory clock : 6000 MHz
  • Graphics score : 18257 (~31% over a 2080 Ti)
A 6000 MHz memory clock doesn't match any known memory type (16 Gbps GDDR6 would show up as 2000 MHz), but IIRC it's not uncommon for 3DMark to misread clocks on unreleased hardware.
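
For what it's worth, here's the usual GDDR6 arithmetic as a quick Python sketch, assuming the common convention that tools report the memory clock as the per-pin data rate divided by 8 (the helper names are just illustrative):

    def data_rate_gbps(reported_mhz: float) -> float:
        """Per-pin data rate implied by a tool-reported GDDR6 memory clock."""
        return reported_mhz * 8 / 1000

    def bandwidth_gb_s(reported_mhz: float, bus_bits: int) -> float:
        """Total memory bandwidth in GB/s for a given bus width."""
        return data_rate_gbps(reported_mhz) * bus_bits / 8

    print(data_rate_gbps(2000))       # 16.0 -> 16 Gbps GDDR6, as expected
    print(bandwidth_gb_s(2000, 384))  # 768.0 GB/s on a 384-bit bus
    print(data_rate_gbps(6000))       # 48.0 -> no such GDDR6 exists, so the
                                      # 6000 MHz reading is almost certainly bogus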
 
That clock is about 400 MHz higher than a 2080 Ti's. The 3080 Ti should have more SMs, so at that clock it would be more than 30% faster. This would have to be a 3080, I think.
 

I'm confused by what you mean here; a 2080 Ti boosts to ~1.8 GHz, and somebody ran one at around the same clock as the 3xxx sample and got a ~14k graphics score.

https://www.3dmark.com/spy/9688137
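
A quick sanity check on those numbers, taking the matched-clock 2080 Ti run linked above at face value:

    # If a 2080 Ti at roughly the same ~1.9 GHz core clock scores ~14k,
    # frequency can't explain the leaked score:
    leak_score = 18257      # leaked Time Spy graphics score
    tu102_matched = 14000   # ~2080 Ti graphics score at a similar clock (run above)

    uplift = leak_score / tu102_matched - 1
    print(f"Uplift at matched clocks: {uplift:.1%}")  # ~30.4%, which would have
    # to come from more SMs and/or per-SM gains rather than clock speed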

Not sure why the leaker didn't find the memory size, or didn't leak it at this time.
 
Time Spy doesn't care about listed clocks; it reports what it sees (or rather how it reads what it sees, which could of course be off on unreleased hardware if something related to clock reporting has changed).
 
Nvidia's listed boost clocks have been surpassed by a wide margin ever since Pascal launched with GPU Boost 3.0. It's the opposite of AMD, whose boost clocks really do come with an 'up to' qualifier.

My custom 1080 Ti does 2 GHz by default while its listed boost is only 1.7 GHz.
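
If you want to watch that gap yourself, here's a small sketch using the NVML Python bindings (pynvml); the listed-boost constant is something you type in from your card's spec sheet, here the ~1.7 GHz figure mentioned above:

    import time
    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

    LISTED_BOOST_MHZ = 1700  # spec-sheet boost; substitute your own card's figure

    # Sample the live graphics clock while something heavy is running;
    # GPU Boost will typically hold it well above the listed boost.
    for _ in range(10):
        cur = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
        print(f"current {cur} MHz vs listed boost {LISTED_BOOST_MHZ} MHz "
              f"({cur - LISTED_BOOST_MHZ:+d} MHz)")
        time.sleep(1)

    pynvml.nvmlShutdown()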

I didn't realize they boosted so far beyond their listed clock speeds. I have a GTX 1660, but I've never really paid attention to its clocking behaviour.
 