AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks

    Votes: 1 0.6%
  • Within a month

    Votes: 5 3.2%
  • Within couple months

    Votes: 28 18.1%
  • Very late this year

    Votes: 52 33.5%
  • Not until next year

    Votes: 69 44.5%

  • Total voters
    155
  • Poll closed.
Probably because it would require another backend flow, another set of masks, another package design, another bring-up, another set of DFM tests. It's not a matter of "ooh, we have this, stamp stamp, now we have something that's 2x this."

Then there is the additional inventory, demand forecasting, etc.

The trouble is that the "native dual core" rumors came from the same Chiphell source as the rumor that "RV870" carries a supposed $10 additional package cost.

Comparing additive sq. mm isn't accurate: 4x180mm² is likely cheaper to produce and test than 1x600mm².

If they're 4 individual chips yes; oh and there's no 600sqmm chip from the competing side.

If they can be performance-competitive with 4x180 then they likely come out ahead. They can also likely get better power efficiency out of 4x180 than 1x600 due to inter-die variation.
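The yield argument behind "4x180 beats 1x600" can be made concrete with a back-of-the-envelope sketch. All numbers here are assumptions for illustration (defect density, edge-loss factor, the simple Poisson yield model), not actual TSMC figures:

```python
import math

WAFER_DIAMETER_MM = 300
D0 = 0.005  # assumed defect density in defects/mm^2 (illustrative only)

def good_dies(die_area_mm2, defect_density=D0):
    """Rough good-die count per 300mm wafer: gross dies scaled by a
    simple Poisson yield model, Y = exp(-A * D0)."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    gross = wafer_area / die_area_mm2 * 0.9  # ~10% lost to edge/scribe (assumed)
    yield_frac = math.exp(-die_area_mm2 * defect_density)
    return gross * yield_frac

# Candidate "products" per wafer: four small dies make one product,
# one big die makes one product.
small_products = good_dies(180) / 4
big_products = good_dies(600)
print(small_products, big_products)
```

Because yield falls off exponentially with die area, the four-small-dies option comes out well ahead per wafer under these assumptions, even after dividing by four.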

I'll believe the latter when I see it: today a 4870X2 doesn't consume less than a GTX280, despite the former being 2x263mm² @ 55nm vs. 583mm² @ 65nm, nor do I see the GTX295, which consists of 2x520mm² @ 55nm, consume far more than a 4870X2 either. In fact, if I can trust hardware.fr's results, the latter runs a tad cooler overall:

http://www.behardware.com/articles/747-1/report-graphics-cards-and-thermal-characteristics.html

Needless to say that I personally expect a far more sophisticated power/frequency management on NV's next generation compared to GT200/200b.
 
ATI has been "betting" on faster RAM for quite some time now. I would not think that this is unlikely.
Higher speed RAM comes with a price premium and thus isn't the ideal candidate for speeding up sub-100-dollar SKUs - IMHO.
 
Higher speed RAM comes with a price premium and thus isn't the ideal candidate for speeding up sub-100-dollar SKUs - IMHO.
Supposedly RV740 was "specified-priced" for a launch in late 2008, when GDDR5 would have cost more... On the other hand, GDDR5 was "late". But this new GPU is theoretically ~9+ months after RV740.

I dunno how to account for GDDR5 pricing, particularly now that Qimonda has disappeared. But RV740 is apparently using the cheapest possible GDDR5. As the 40nm process matures, the price of the GPU decreases, allowing GDDR5 to take a bigger bite of the budget.

When we start seeing 1GB GDDR5 on cards below $100 I guess we know GDDR5 pricing has arrived.

Soon we'll have >100GB/s for <$100 :cool:

Jawed
 
no-x,
No, they don't consume them, but they need to be fed with data from the same pool as the rest. And if the rest's already sucking up every last bit...

--
And now for something completely different:
The Huddy speaks again: LDS in DX11 Compute can speed up HD-SSAO by a factor of up to 3:1 compared to the already quite fast 10.1-path:
http://www.youtube.com/watch?v=KqLVmiFO2LA
 
Depends on ALU:TEX requirements. If newer games use a higher ALU:TEX ratio, a hypothetical 800-SP GPU could perform better even without additional bandwidth.
 
Could the AMD influence have somehow helped ATI engineers pull off a small ~2GHz chip :?:

That would allow ATI to have a small chip with competitive specs without having to go all MCM or whatever.

ATI haven't shown any indications of managing to get anywhere near those kinds of clocks so far but NV hadn't either prior to G80 :devilish:
 
Could the AMD influence have somehow helped ATI engineers pull off a small ~2GHz chip :?:

That would allow ATI to have a small chip with competitive specs without having to go all MCM or whatever.

ATI haven't shown any indications of managing to get anywhere near those kinds of clocks so far but NV hadn't either prior to G80 :devilish:

2GHz is something like twice the maximum they achieve today. So while clocks may have been going up thanks to AMD's CPU experience, I don't foresee them going higher than 1.2GHz, at a very big stretch, in the near future.

There's a difference between core clocks and shader clocks, and AMD and Nvidia have different shader pipelines, which doesn't mean that one is necessarily better than the other just because one clocks higher.
 
Anyway, a perhaps separate question is whether having two dice in a single board is really advantageous in any fashion (programming, performance, etc.) over using two different cards.

The physical proximity allows for certain potential benefits, though the current implementations of Crossfire-on-a-stick leave all or most of them on the table.

If the sideport in RV770 were actually used, there would have been an additional link between the chips with some extra bandwidth not available to a two-card solution. AMD stated that there was some benefit, but that the driver team improved the software so much that it was no longer as compelling to force board partners to actually use the side port. (I can't find the links, but I think the biggest benefit was some improved minimum frame rates. Peak benchmark numbers didn't go up all that much so...)
The PLX bridge chip in theory allows for slightly faster communication between the GPUs. There might be some advantage if it allows for full 16x communication independent of the motherboard the card sits in.

If GPUs were architected differently, there could have been savings in the amount of DRAM needed, less CPU overhead, and maybe the GPUs could have appeared to the system at large to be one single GPU, avoiding the drama of required profiles, Catalyst AI, any number of other examples of GPU laxity.

The crossfire-on-a-stick model gives all the headaches of multi-card solutions and driver shenanigans, though it does possibly save a PCIe slot, depending on the cooling solution and motherboard layout.
I suppose that's an advantage.
 
How large a gain you see by going compute shader depends on your ALU:TEX ratio. A compute shader implementation will likely require somewhat more ALU operations than a pixel shader implementation, whereas the TEX operations drop significantly. An ATI chip that has plenty of ALU and relatively few TEX units naturally sees a large boost in performance. For Nvidia, which has less ALU power and more TEX power, the performance boost will be smaller.
 
The trouble is that the "native dual core" rumors came from the same Chiphell source as the rumor that "RV870" carries a supposed $10 additional package cost.

Another tape-out probably costs in the range of $60M or so, all wrapped together. Add in the additional inefficiencies etc. of the bigger design and they probably need on the order of 15-20 million units of volume to break even with the bigger chip.
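The break-even arithmetic implied here is just NRE divided by per-unit savings. The per-unit figure below is an assumption chosen to land in the quoted range, not a number from the post:

```python
# Illustrative numbers paraphrasing the post: ~$60M of extra NRE for an
# additional tape-out, recovered via an assumed per-unit cost saving.
extra_nre = 60e6        # masks, backend flow, bring-up, DFM tests, etc.
per_unit_saving = 3.50  # assumed $ saved per unit with the bigger chip

break_even_units = extra_nre / per_unit_saving
print(break_even_units / 1e6)  # millions of units to recoup the NRE
```

With a few dollars of saving per unit, the required volume does indeed come out in the 15-20M range; a smaller per-unit saving pushes it higher still.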


If they're 4 individual chips yes; oh and there's no 600sqmm chip from the competing side.

I thought GT300 was expected to be in the 500-600mm² size range.
 
Another tape-out probably costs in the range of $60M or so, all wrapped together. Add in the additional inefficiencies etc. of the bigger design and they probably need on the order of 15-20 million units of volume to break even with the bigger chip.

60M? Do you know of a good source for information on these costs?
 
Another tape-out probably costs in the range of $60M or so, all wrapped together. Add in the additional inefficiencies etc. of the bigger design and they probably need on the order of 15-20 million units of volume to break even with the bigger chip.




I thought GT300 was expected to be in the 500-600mm² size range.

Surely you mean that a tape out costs $6M? That'd be reasonable, plus an easy $1-10M for any extra engineering and validation effort.

DK
 