AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to AMD's upcoming ATI RV870 GPU lineup?

  • Within 1 or 2 weeks

    Votes: 1 0.6%
  • Within a month

    Votes: 5 3.2%
  • Within a couple of months

    Votes: 28 18.1%
  • Very late this year

    Votes: 52 33.5%
  • Not until next year

    Votes: 69 44.5%

  • Total voters
    155
  • Poll closed.
They may be starting with 40nm, but unless they planned to release an x2 part, or planned to keep an option of an x2 part (in reserve, should it be needed), it seems most likely that they would have used nearly all the power budget to build an uber GT300.

Any particular reason they would not do that?

We're off topic in this thread; this kind of conversation would be better continued in the relevant thread. But since we're here: a very good reason not to do that is risk. They already broke with their long-standing strategy (held up through GT200) of avoiding the smallest available high-performance process by going to 40nm with D12U; why would they also be dumb enough to maximize a risk factor that is already high on such a new process? And even if they had done as you say, try to estimate when they would actually have been able to release such a part, given that TSMC's 40G is only now on an upswing.
 
It was 90W for the initial 4870s, and 60W for the 4890 - you're about half the way there! ;)

There were changes to the ASIC design to make that happen and that was a direct result of what we learnt with RV7xx.

Yeah, but the 60W with the 4890 was only on the reference BIOS or something, right? Most reviews I saw showed 90W, not 60W, and that was the explanation figured out on these forums.

I have some kind of PowerColor model; it's overclocked and definitely has a non-reference fan. It's pretty annoying, actually: apparently programs like RivaTuner can't read the VRM temps and things like that, which I never knew was a downside of not buying a reference card. (I'm having a shutdown issue which is likely due to an insufficient power supply, but for a while I thought it was overheating, so I wanted the VRM temps!)

So anyway, I'm not 100% sure, but I'm betting it's a 90-watter.
 
That is what I wanted to ask. CoH was always a TWIMTBP game, as you can see from the 4890 benchmark. Something changed in the 5870 and we see the performance skyrocket! I wonder what's going on in the CoH engine. Were the TMUs and ROPs limiting the 4800 series, and did Nvidia exploit that in games? Can we see/conclude something about the AMD/Nvidia architectures...

The increase is phenomenal.
I play it, and it'd be good to get an fps increase.
 
That doesn't make sense. The expected level is 2x 4890 :D
Depends. The HD5870 was expected to be faster than the HD4870X2 and slightly faster than the GTX295, and that's the situation it's in.

Choosing the HD4890 as a reference point could be a bit misleading - if you expect a theoretically large performance difference, more factors can influence the resulting performance (CPU, VRAM capacity...).

The Radeon X800XT-PE was theoretically almost 3x faster than the Radeon 9800PRO, and it was more efficient. But the resulting performance was only about 2x higher at launch.
 
Why should the expected level be a 4890 X2? The doubled resources are relative to the 4870; the 4890 just has higher clocks.

Nope, it doesn't.

Choosing the HD4890 as a reference point could be a bit misleading - if you expect a theoretically large performance difference, more factors can influence the resulting performance (CPU, VRAM capacity...).

And those factors don't affect a 4870X2 comparison? A 5870 is almost a perfect doubling of 4890, save for bandwidth. Why use the 4870X2 given its different clocks and crossfire scaling issues as a benchmark?
 
Why use the 4870X2 given its different clocks and crossfire scaling issues as a benchmark?
What are the CrossFire scaling issues? I think that's more myth than reality today, at least for the game sets used for testing. ATi is using AFR now. If a game has a correct AFR profile and everything works fine, performance can be better than on a single GPU with twice as many functional units, because AFR doubles bandwidth, some caches and triangle rate.

There are some situations where CF doesn't work because of a missing profile - but that doesn't apply to the majority of games used for benchmarking.

There are some situations where a CF profile is present but scaling is poor. That's often caused by a CPU limitation, not by CF itself; a single powerful GPU wouldn't perform better in that case.

There are CF issues - synchronization, and support for lesser-known games - but none of them affects HD4870X2 performance in reviews.
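The CPU-limit point above can be shown with a toy model: AFR multiplies GPU throughput, but the CPU still has to prepare every frame. All frame times below are hypothetical numbers for illustration only.

```python
# Toy model of AFR (alternate-frame rendering) scaling: poor "CrossFire
# scaling" often turns out to be a CPU limit, not a CF failure.

def fps_single(cpu_ms, gpu_ms):
    # One GPU: the slower of CPU and GPU work paces the frame rate.
    return 1000.0 / max(cpu_ms, gpu_ms)

def fps_afr(cpu_ms, gpu_ms, n_gpus=2):
    # AFR: GPUs render alternate frames, so GPU throughput scales by n,
    # but the CPU must still submit every single frame.
    return 1000.0 / max(cpu_ms, gpu_ms / n_gpus)

# GPU-bound case: AFR scales almost perfectly.
print(fps_single(5.0, 20.0))   # 50 fps
print(fps_afr(5.0, 20.0))      # 100 fps

# CPU-bound case: AFR "fails to scale" - but a single GPU with twice
# the units would hit exactly the same wall.
print(fps_single(15.0, 20.0))  # 50 fps
print(fps_afr(15.0, 20.0))     # ~66.7 fps, limited by the CPU
```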
 
I don't get what you're pushing. It's far simpler to take actual 4890 results and double them than it is to take 4870X2 results with a hundred caveats attached. Why make it so complicated?
 
You can't just double the results of the HD4890 and compare them to the HD5870.

For the HD5870, CPU performance isn't doubled, VRAM capacity isn't doubled, bandwidth isn't doubled, and triangle rate isn't doubled either.

Only fillrate, texturing/filtering rate and arithmetic rate are doubled when comparing the HD5870 to the HD4890. Interpolation rate is much higher, but at the cost of arithmetic rate.
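A quick sanity check of which rates actually double, using the commonly published unit counts, clocks and memory bandwidth for the two cards (treat the exact figures as reader-verifiable spec-sheet numbers, not authoritative):

```python
# Which HD4890 resources double on the HD5870? Published figures:
# core clock (MHz), ALU count, TMUs, ROPs, memory bandwidth (GB/s).
hd4890 = dict(core=850, alus=800,  tmus=40, rops=16, bw_gbs=124.8)
hd5870 = dict(core=850, alus=1600, tmus=80, rops=32, bw_gbs=153.6)

for key in hd4890:
    ratio = hd5870[key] / hd4890[key]
    print(f"{key}: x{ratio:.2f}")

# core:   x1.00 (same clock)
# alus:   x2.00 -> arithmetic rate doubled
# tmus:   x2.00 -> texturing/filtering rate doubled
# rops:   x2.00 -> fillrate doubled
# bw_gbs: x1.23 -> bandwidth clearly NOT doubled
```

So perfect 2x scaling over the HD4890 shouldn't be expected wherever bandwidth (or the CPU) is the limit.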
 
All those things affect the 4870X2 comparison too! So again I ask: what exactly about the 4870X2 makes it a better comparison? ;)
 
Some speculation ...

I decided to use my neurons (yes, I still have some) and I discovered something that nobody seems to have noticed.

According to my discovery, the RV870 "could" have a 384-bit bus!

Yes, I know, this idea sounds totally absurd...


As ATI has already shown us, they can fit a 256-bit bus under the RV670, which has a surface of 193.61mm². However, as everyone knows, the RV870 measures 18.25mm x 18.25mm = 333mm², which is enormous compared with the RV670.

With a 256-bit bus, we would thus be entitled to expect the RV870 to have approximately the same number of pins as the RV670 (probably more), with some generous spacing between the pins.

But this is not the case!

RV870vsRV670.png


Why does the RV870 have 35% more pins? It can't just be the memory type, since GDDR5 actually needs fewer I/O pins per interface:

RV670 = GDDR3 (69 I/O pins)
RV870 = GDDR5 (62 I/O pins)

The HD 5870 has an 850 MHz frequency, and some custom cards will soon run at 900 or 950 MHz.
So I ask you: what's left for the Radeon HD 5890?
A much higher frequency? I don't think so.

Could it be that the RV870 is in fact hiding a weapon of mass destruction?

Maybe the HD 5890 is in fact equipped with a 384-bit bus, and the HD 5850/70 use only 66% of it.
To me, 1737 pins seems enough for a 384-bit bus, but that's only speculation.

Maybe the pins are used by Eyefinity.
Maybe it's for a secret SidePort, reserved for the 5870 X2 (integrated PLX?).
Or maybe they're just extra power/ground pins.

Maybe I'm crazy and see 384-bit buses everywhere. :oops:
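For what it's worth, the per-interface I/O pin counts quoted above can be turned into a rough memory-pin budget. The per-32-bit-channel figures (69 for GDDR3, 62 for GDDR5) come from the post itself; everything else about the package (power, ground, PCIe, display) is unknown, so this is only a partial accounting.

```python
# Rough memory I/O pin budget from per-channel figures quoted in the post.

def mem_io_pins(bus_bits, pins_per_32bit_channel):
    # A memory bus is built from 32-bit channels; each channel needs a
    # fixed number of I/O pins (data, address/command, strobes).
    channels = bus_bits // 32
    return channels * pins_per_32bit_channel

rv670_mem = mem_io_pins(256, 69)  # 552 pins: 256-bit GDDR3 on RV670
rv870_256 = mem_io_pins(256, 62)  # 496 pins: 256-bit GDDR5 on RV870
rv870_384 = mem_io_pins(384, 62)  # 744 pins: hypothetical 384-bit GDDR5

print(rv670_mem, rv870_256, rv870_384)
# The ~248-pin gap between a 256-bit and a 384-bit GDDR5 interface is
# one way to account for part of a ~35% jump in total pin count - but
# so would extra power/ground pins for a bigger, hotter die.
```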
 
Aren't the chips a bit too close together for a PLX chip? The 4870X2 had the two GPUs and the PLX chip in close proximity. This one is really close too, but it's hard to say whether there's a PLX at all.
Fellix's picture appears to indicate that there is a small chip there between the two GPUs.

Jawed
 
That is what I wanted to ask. CoH was always a TWIMTBP game, as you can see from the 4890 benchmark. Something changed in the 5870 and we see the performance skyrocket! I wonder what's going on in the CoH engine. Were the TMUs and ROPs limiting the 4800 series, and did Nvidia exploit that in games? Can we see/conclude something about the AMD/Nvidia architectures...
The other obvious difference is interpolation. If attributes are poorly packed (exported as scalars or vec2s, rather than packed into vec4s) on output from the vertex shader, then the interpolation rate on ATI takes a massive hit.

I haven't worked out the numbers, but perhaps on HD5870 the interpolation rate is considerably higher. This would mean that there's no great loss in interpolation rate.
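The packing effect can be illustrated with a toy slot count, assuming the hardware interpolates one vec4 "slot" at a time per cycle per unit (that per-slot model is an assumption for illustration, not a documented rate):

```python
# Toy count of interpolation slots consumed by vertex shader outputs.
# Assumption: hardware interpolates per vec4 slot, so loosely exported
# scalars/vec2s each burn a whole slot, while packed vec4s share slots.
import math

def interp_slots(attribute_component_counts, packed):
    if packed:
        # Pack all components tightly into vec4 slots.
        total = sum(attribute_component_counts)
        return math.ceil(total / 4)
    # Unpacked: each attribute occupies its own slot regardless of width.
    return len(attribute_component_counts)

# e.g. a vertex shader exporting 4 scalars and 3 vec2s (10 components)
attrs = [1, 1, 1, 1, 2, 2, 2]
print(interp_slots(attrs, packed=False))  # 7 slots
print(interp_slots(attrs, packed=True))   # 3 slots: far less interpolation work
```

Under that model, a shader like the hypothetical one above does over twice the interpolation work when its outputs are left unpacked, which is the kind of hit a much higher raw interpolation rate on HD5870 would hide.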

Jawed
 