So, do we know anything about RV670 yet?

And how fast did the 112SP G80 GTS show up? Even faster. IMHO you are over-estimating the time Nvidia needed to make the change.

How fast did 112SP G80 show up? I have not seen any hard numbers regarding the time between Nvidia starting to bin 112SP G80s and the cards coming out. Have you? For all I know, they have been sitting on those chips for a year.
 
How fast did 112SP G80 show up? I have not seen any hard numbers regarding the time between Nvidia starting to bin 112SP G80s and the cards coming out. Have you? For all I know, they have been sitting on those chips for a year.
It came up as soon as the NDA was up and reviewers started asking questions about GTS positioning.

I believe Nvidia ships the dies and the AIBs disable/activate the TPCs; anyone who knows better, correct me. I'll gladly accept. :smile:
 
Uh, that's going to depend on what you're running. You're not going to get 10 FPS in Crysis, because that'd be a 30% performance boost or so.

You can get a 21% (+6 fps) performance boost in Crysis by going from a 600MHz to a 700MHz clock on the 8800 GT. Sure, you're never going to reach that with the HD3850 Pro, but you most likely will with the HD3870 :) That means going way above 850MHz, so most likely around 890MHz. But the new 8800 GT cards will go even higher than 700MHz; my estimate is 750+, and then it will be hard for the HD3870 (reference boards) to catch up.
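As a rough sanity check on that kind of scaling, here is a small sketch assuming FPS grows at most linearly with core clock (in practice it is usually less, since memory bandwidth and the CPU don't speed up). The base frame rate is just the one implied by the "+6 fps = 21%" figure above:

```python
# Rough upper bound: how much FPS a core-clock bump can buy if performance
# scales linearly with clock. Real games scale less than this because memory
# bandwidth and CPU limits stay fixed.

def best_case_fps(base_fps, base_clock_mhz, new_clock_mhz):
    return base_fps * new_clock_mhz / base_clock_mhz

base_fps = 6 / 0.21  # ~28.6 fps, implied by the "+6 fps = 21%" figure above

print(best_case_fps(base_fps, 600, 700))  # 600 -> 700 MHz: ~33.3 fps (+16.7%)
print(best_case_fps(base_fps, 775, 890))  # 775 -> 890 MHz: ~32.8 fps (+14.8%)
```

Note that 600 to 700MHz is only a 16.7% clock increase, so a 21% gain would mean better-than-linear scaling; that can happen when the shader clock is raised along with the core clock, but it isn't something a pure core overclock guarantees.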

Maximum G92 = 850MHz @ 1.3v
Maximum RV670 = Unknown (ATI officially says 775-800, but I doubt that; it must be much higher)
 
Maximum G92 = 850MHz @ 1.3v
What's your source for that? Also, I'm sure it could go even higher for a very low-volume SKU: 65nm variability between wafers is fairly high, and G92's volume will be very high compared to G80's. So if you accumulate chips for a few months, you should be able to hit even higher clocks for a SKU that doesn't need to be as competitive in terms of perf/dollar.

Not saying they're doing that, but I think it's certainly worth pointing out. Even if there is a GX2, that might make sense because there will be 3-way SLI and 4-way SLI, but not 6-way SLI... I have no idea whether there will be a GX2 or not at this point, however.
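To put the accumulation argument in concrete terms, here is a toy sketch; the clock distribution, volumes, and target clock are all invented numbers, just to show how even a small tail of the distribution can feed a low-volume SKU if the top bins are collected over time:

```python
# Toy illustration of "accumulate the top bins for a halo SKU".
# The mean/sigma of the max-clock distribution, the monthly volume and the
# target clock are all made-up numbers for illustration only.
import random

random.seed(0)
MEAN_MHZ, SIGMA_MHZ = 700, 60     # hypothetical spread of G92 maximum clocks
CHIPS_PER_MONTH = 1_000_000       # hypothetical mass-market volume
HALO_CLOCK = 850                  # target clock for the low-volume SKU

sample = [random.gauss(MEAN_MHZ, SIGMA_MHZ) for _ in range(200_000)]
top_fraction = sum(c >= HALO_CLOCK for c in sample) / len(sample)

print(f"dies reaching {HALO_CLOCK} MHz: {top_fraction:.3%} of production")
print(f"roughly {int(top_fraction * CHIPS_PER_MONTH):,} halo-capable dies per month")
```

Whether the real distribution looks anything like this is anyone's guess, but it shows why a SKU with a tenth of the GT's volume could sit at clocks the mass-market bin never could.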
 
ATI's own Catalyst Overdrive utility will probably allow the GPU to hit 800MHz+.
To go back to RV670, that's an interesting point. ATI apparently made sure R600 GPUs could overclock acceptably via overdrive. Now that the HD3870's clocks have been decreased, I wonder if this will also be the case? It's a much higher volume part, so I'm not sure if they can afford that, but it might be interesting if so.
 
ATI's own Catalyst Overdrive utility will probably allow the GPU to hit 800MHz+.

Possibly, but why would only the 3870 have the "+" in it, since Overdrive works on the 3850 too?

To go back to RV670, that's an interesting point. ATI apparently made sure R600 GPUs could overclock acceptably via overdrive. Now that the HD3870's clocks have been decreased, I wonder if this will also be the case? It's a much higher volume part, so I'm not sure if they can afford that, but it might be interesting if so.

True, true
 
Part of the reason I might go with the Radeon HD3870 is the overclocking headroom, if ATI provides it. I was also interested in how other people look at the 55nm RV670XT, which is why I made the poll to see the results.
My theory on decreasing the GPU frequency from 825MHz to 775MHz: the reason behind it is to show that "our" ATI GPU can scale better than Nvidia's. Edit: also to reduce power consumption.
 
On the foundry floor, maybe. Once it shipped out, I am not so sure.

Not going to happen in the fab: you want to disable the units that are actually defective, so it's better to do this after packaging and final testing, which usually happens somewhere else.

In theory, it could even be done as the chips are leaving the warehouse on their way to the customer. That is useful when you want to cripple fully working dies into a lower-end part, as it gives you the most flexible way to manage inventory, but it complicates logistics in other ways, like the need to keep track of each individual chip.

Blowing fuses is a matter of applying a higher voltage to a dedicated pin, followed by some kind of programming sequence, very much like programming a flash ROM. In theory, it can even be done on the PCB itself.
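For illustration only, a sequence along those lines could look roughly like this; the register map, command values, and timings below are entirely invented, not taken from any real GPU or fuse block:

```python
# Hypothetical eFuse programming sequence, loosely analogous to programming
# a flash ROM: raise the programming voltage, select and blow the fuse, verify.
# All register offsets, command values and delays are invented for illustration.
import time

class FuseController:
    """Sketch of an MMIO-style interface to an imaginary on-die fuse block."""

    def __init__(self, mmio):
        self.mmio = mmio  # object with read(offset) / write(offset, value)

    def blow(self, fuse_index):
        self.mmio.write(0x00, 1)           # enable the higher programming voltage
        time.sleep(0.001)                  # let the programming rail settle
        self.mmio.write(0x04, fuse_index)  # select the fuse (e.g. one per cluster)
        self.mmio.write(0x08, 0xA5)        # issue the (made-up) program command
        time.sleep(0.0005)                 # programming pulse width
        self.mmio.write(0x00, 0)           # drop back to the normal voltage
        # read back the fuse bank and confirm the selected bit is now set
        return bool(self.mmio.read(0x0C) & (1 << fuse_index))
```

The sequence itself is simple; controlling who is allowed to run it, and where, is the harder problem.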

There's no way you're going to let customers blow fuses: maybe it's different in the GPU world (with carefully selected and trusted board makers), but it's not unusual for shady customers to try to put low-end versions on a high-end board and sell it as such.
 
Noted. Which leaves the question: if you are going to change the configuration of the chip, how much time do you need before it hits retail? Even such a mundane thing as printing boxes with the correct specs takes time.
 
Ok, here's a question. If people, using extreme means, were able to clock R600 to over 1GHz, and RV670 is 55nm, then what are the chances that the chip has a lot of headroom to increase clocks? It sounds like they're targeting a low price/power envelope, but I would think the clocks would scale up dramatically from where they're at now if proper cooling was used. If this is the case, why wouldn't someone smack a two-slot cooler on one and crank the settings up to around the power usage that R600 had, if the chip could handle it?
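As a back-of-the-envelope answer, assuming dynamic power scales roughly with clock times voltage squared and ignoring leakage entirely (a big omission on 55nm), spending an R600-sized power budget on an RV670 would notionally buy a lot of clock; all the inputs below are guesses:

```python
# Back-of-the-envelope: dynamic power ~ f * V^2. Leakage, hot-spot power
# density and VRM limits are all ignored, and every number is a guess.

RV670_TDP = 105.0    # W, HD3870 board figure mentioned later in the thread
R600_BUDGET = 215.0  # W, rough HD2900 XT class board power (assumed)
BASE_CLOCK = 775.0   # MHz
VOLTAGE_BUMP = 1.10  # assume ~10% more voltage is needed to get there

# P ~ f * V^2  =>  f_new / f_base = (P_new / P_base) / (V_new / V_base)^2
clock_ratio = (R600_BUDGET / RV670_TDP) / VOLTAGE_BUMP ** 2
print(f"notional clock at an R600-sized budget: ~{BASE_CLOCK * clock_ratio:.0f} MHz")
```

That number is obviously far too optimistic, which is really the point: total board power is unlikely to be what stops you first; leakage, power density in the hot spots, and the voltage the silicon tolerates give out well before a 105W part reaches an R600-sized budget.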
 
How likely is it that power density, rather than simply total power output, would be what limits clock speeds? It seems to be semi-confirmed that the HD3870 does come equipped with a dual-slot cooler.

The leaked slides show the 3870 with a dual-slot cooler; I would consider that 100% confirmed rather than semi-confirmed.
 
What's your source for that? Also, I'm sure it could go even higher for a very low-volume SKU: 65nm variability between wafers is fairly high, and G92's volume will be very high compared to G80's. So if you accumulate chips for a few months, you should be able to hit even higher clocks for a SKU that doesn't need to be as competitive in terms of perf/dollar.

Not saying they're doing that, but I think it's certainly worth pointing out. Even if there is a GX2, that might make sense because there will be 3-way SLI and 4-way SLI, but not 6-way SLI... I have no idea whether there will be a GX2 or not at this point, however.

Look here: http://www.xtremesystems.org/forums/showthread.php?t=163929 - the first HWBot results are also incoming (2-phase reference PCB).

Also, I have a theory about the overclockability of the HD3870, and that is: look at the TDP. It is now a maximum of 105W, where before it was 135W, so there you go, a 30W overclocking buffer for the PCB; now you just have to calculate how many MHz 30W would buy ;) Or do you want to tell me that the -50MHz reduction from 825MHz accounts for those 30W? :D That wouldn't fit ATI's "performance per watt" marketing well, would it? ;)
But it is very clear to me that 775MHz wouldn't be enough to beat the average 700-720MHz of the 8800 GT. My guess is 10W = 100MHz, so x3 = 300MHz, and 775 + 300 = 1075MHz for the full 30W.
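That flat 10W-per-100MHz step is probably optimistic, since each extra 100MHz costs more than the last once voltage has to rise with clock. A small sketch with guessed numbers; the split between dynamic power and everything else, and the voltage scaling, are assumptions, not measurements:

```python
# Why "30W buffer = +300MHz" is optimistic: power grows at least linearly with
# clock, and faster once voltage has to rise. All splits and ratios are guesses.

BOARD_TDP = 105.0   # W, HD3870 figure from the post above
BASE_CLOCK = 775.0  # MHz
DYNAMIC_W = 60.0    # assumed dynamic share of board power at 775 MHz
STATIC_W = 25.0     # assumed leakage + memory/board overhead

for clock in (775, 875, 975, 1075):
    f = clock / BASE_CLOCK
    v = 1.0 + 0.05 * (f - 1.0) / 0.13  # guess: ~5% more voltage per ~13% more clock
    power = STATIC_W + DYNAMIC_W * f * v ** 2
    verdict = "within" if power <= BOARD_TDP else "over"
    print(f"{clock} MHz -> ~{power:.0f} W ({verdict} the {BOARD_TDP:.0f} W TDP)")
```

Under those made-up assumptions, something in the high 800s to roughly 900MHz still fits inside 105W, while 1075MHz lands closer to the old 135W figure.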
 
The best overclock I can find around for G92 is this: http://www.xtremesystems.org/forums/showpost.php?p=2539098&postcount=172 - which is a tiny bit better than 850@1.3v, but indeed it's pretty much in that ballpark.

However, the 8800GT is most likely NOT the top bin, so my point stands: given the volume, a SKU with less than 1/10th the volume of the 8800 GT could be cherry-picked and go even higher without overly exotic cooling solutions. A mass-market SKU at 800MHz+ also seems fairly easy to pull off to me given that, and that'd be 50%+ faster than a GT.

Anyway, back to RV670: the way these things work is that a chip needs to be able to attain a given clock in a given voltage range, within a given TDP. One possibility for the HD 3870 is that they test it for the retail clock at a given voltage, and then at an 'Overdrive' clock at a higher voltage.

The impact on yields of that should be smaller than artificially increasing the required clocks for the chip to be called an HD 3870, even though it could handle the retail clocks. However, I don't know if Overdrive can change the voltage?

Another possibility is that they just test for the TDP at 775MHz but not at the higher frequency, even though all the chips in that bin could easily attain that frequency. This would make sense if leakage variability was their main problem; if clock variability was less severe, it would make a fair bit of sense too, as the impact on yields should also be minimal. Not a bad trade-off either way, if marketed properly!
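In decision-rule terms, the two binning policies being floated might look roughly like this; the clocks and TDP come from the discussion above, but the test structure and thresholds are my own sketch, not ATI's actual flow:

```python
# Two hypothetical ways of binning an HD3870-class part, as discussed above.
# The test structure and thresholds are illustrative, not ATI's real flow.
from dataclasses import dataclass

@dataclass
class Die:
    fmax_at_vnom: float   # MHz the die sustains at the nominal voltage
    fmax_at_vhigh: float  # MHz it sustains at the higher "Overdrive" voltage
    power_at_775: float   # W measured at 775 MHz and nominal voltage

RETAIL_CLOCK, OVERDRIVE_CLOCK, TDP = 775, 825, 105

def two_point_policy(die: Die) -> bool:
    # Qualify the die at the retail point AND at an Overdrive point.
    return (die.fmax_at_vnom >= RETAIL_CLOCK
            and die.fmax_at_vhigh >= OVERDRIVE_CLOCK
            and die.power_at_775 <= TDP)

def tdp_only_policy(die: Die) -> bool:
    # Only qualify power at 775 MHz; assume most dies in the bin clock higher anyway.
    return die.fmax_at_vnom >= RETAIL_CLOCK and die.power_at_775 <= TDP

# Example: a die that only clocks to 815 MHz at the higher voltage
die = Die(fmax_at_vnom=800, fmax_at_vhigh=815, power_at_775=98)
print(two_point_policy(die), tdp_only_policy(die))  # False True
```

The first costs a little extra test time and slightly tighter yields; the second is cheaper but relies on the clock distribution sitting well clear of 775MHz.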
 