AMD RV770 refresh -> RV790

I am well aware that speculation has already been made.
I was simply saying that I don't believe we will be seeing those product specifications in the RV790.
 
Could somebody speculate on the 200MHz GPU clock increase (750MHz ----> 950MHz)? How much performance gain are we really looking at anyway?
 
"Radeon 4890" The RV790 has 800SPu, as well as the RV770, however, the clock rates of 750MHz GPU (HD4870, RV770) on gargantuan 950MHz respectively. In the above configuration is a theoretical output value of over 1.5 TFlops.

http://74.125.93.104/translate_c?hl...0-bald&usg=ALkJrhhUdjM3EuvRxzejK_bMzeGHqrVN4g
http://translate.google.com/transla...103-rv790-vs-gt206&hl=en&ie=UTF-8&sl=de&tl=en

There they note the Radeon HD4890 speculation that is currently making the rounds, which we had already reported.

update...
http://translate.google.com/transla...es-nur-heisse-luft&sl=de&tl=en&hl=en&ie=UTF-8
;)
 
Uh, why? Clock speed increases generally equate to near-linear performance increases in any compute-bound scenario.
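Back-of-the-envelope for the rumoured jump, assuming perfect compute-bound scaling (best case, not a measurement):

old_clock, new_clock = 750, 950  # MHz: HD4870 core vs rumoured RV790 core
speedup = new_clock / old_clock
print(f"{(speedup - 1) * 100:.1f}% theoretical gain")  # ~26.7% at best

Anything bandwidth- or CPU-limited will scale worse than that.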

Ahh... so the 4850 is at a near-20% deficit in most games?
I recall seeing some overclocking results for RV770 and G200, and "near linear" wouldn't be the best way to describe them.
Since when are games usually "compute-bound"?
 
Ahh... so the 4850 is at a near-20% deficit in most games?
I recall seeing some overclocking results for RV770 and G200, and "near linear" wouldn't be the best way to describe them.
Since when are games usually "compute-bound"?

There is more difference between the 4850 and the 4870 than mere core clocks. The 4850 is very much bandwidth-bound due to its use of GDDR3.
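For reference, the bandwidth gap is easy to work out (stock memory specs quoted from memory, so treat as approximate):

def bandwidth_gb_s(bus_bits, effective_mtps):
    # Bandwidth = bus width in bytes x effective transfer rate.
    return bus_bits / 8 * effective_mtps / 1000.0

print(bandwidth_gb_s(256, 1986))  # HD4850, GDDR3 @ 993MHz:  ~63.6 GB/s
print(bandwidth_gb_s(256, 3600))  # HD4870, GDDR5 @ 900MHz:  115.2 GB/s

Barely more than half the bandwidth, which is a big part of why the 4850 suffers with heavy AA.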
 
There is more difference between the 4850 and the 4870 than mere core clocks. The 4850 is very much bandwidth-bound due to its use of GDDR3.

Where is the link to the data showing the difference when a 4870 is underclocked, both core and memory?
I've been trying to find it and can't. I could have sworn the link was posted here.
 
HD4850->HD4870 is typically a 23-32% performance gain with 4xAA/16xAF at decent resolutions, when the core clock is 20% higher.

With 8xAA/16xAF the gain goes up to around 45-50% in the best cases.

Essentially, at 4xAA/16xAF there's easily enough bandwidth to afford a 20%+ boost for a 950MHz HD4890 in comparison with the HD4870.
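To put that in toy-model form (an Amdahl-style blend; compute_share is a guess, not a measurement, and it assumes the memory clock stays put):

core_speedup = 950 / 750  # rumoured RV790 core vs HD4870 core: ~1.27x
bw_speedup = 1.0          # assuming bandwidth is unchanged

def expected_speedup(compute_share):
    # Harmonic blend of compute-bound and bandwidth-bound frame time.
    return 1 / (compute_share / core_speedup + (1 - compute_share) / bw_speedup)

print(expected_speedup(1.0))  # fully compute-bound: ~1.27x
print(expected_speedup(0.5))  # half bandwidth-bound (8xAA-ish): ~1.12x

Which matches the intuition: the less bandwidth-bound the setting, the more of that 26.7% clock bump actually shows up.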

But, I dunno, it seems to me there probably won't be any refresh.

Jawed
 
If Nvidia doesn't give them a reason to bring out a refresh, they won't.

Many people thought that the 55nm GT200, aka GT200b, would get higher clocks and be released as the GTX 270 and GTX 290, and now it seems that this isn't happening.

RV770 has great yields, and they can sell the partially defective chips as the HD4830 and even HD4810.
 
Many people thought that the 55nm GT200, aka GT200b, would get higher clocks and be released as the GTX 270 and GTX 290, and now it seems that this isn't happening.

That's probably because GT200b can't beat RV770, even with higher clocks.
 
That's probably because GT200b can't beat RV770, even with higher clocks.

Beat it in what sense? GT200 is already faster than RV770, unless you're talking about dual GPUs vs single GPUs?

On the single-GPU front GT200b would only widen the gap, but if NV wants to take the overall performance crown (I'm dubious about calling it that, tbh) then it will need a dual-GPU solution of its own, without a doubt.

Dual 270s would probably do the trick if a 270 were essentially a 260 216SP with slightly higher clock speed. Whether that's feasible, though, is a different matter.
 
I mean in the sense of winning the performance crown

Yeah, that's what I thought. As an architecture, though, GT200 is already faster. It takes two RV770s to beat one GT200, so GT200b obviously won't be in any danger from that point of view.

The question is whether GT200b will allow NV to put two of them on a single board at greater-than-260 clock speeds. Then ATI could be in trouble.
 
The question is whether GT200b will allow NV to put two of them on a single board at greater-than-260 clock speeds. Then ATI could be in trouble.
I'd put that in the exaggeration basket. If such a response from Nvidia does come to fruition, it would be too little, too late.

(Much like how ATI used to respond to Nvidia's products and then Nvidia would simply drop prices because they had an incredibly cheap product to begin with)
 