AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 GPU lineup?

  • Within 1 or 2 weeks

    Votes: 1 0.6%
  • Within a month

    Votes: 5 3.2%
  • Within a couple of months

    Votes: 28 18.1%
  • Very late this year

    Votes: 52 33.5%
  • Not until next year

    Votes: 69 44.5%

  • Total voters
    155
  • Poll closed.
CPU design cycles are so much longer. Their turnaround will come when Bulldozer hits. I hope...
Off-topic, but the cycles aren't so much longer. Instead, AMD has cancelled so many products (two, if I remember correctly).
Intel implemented QPI, an integrated memory controller, a PCIe controller and (soon) graphics in the CPU, all while AMD is selling the same old K10/K10.5.
The sad part is that the relative lack of competition lets Intel hold back features.
The scary part is that if AMD can't make Bulldozer fly... But enough of OT :oops:

Edit: Anand has a piece about Eyefinity.
 
Unremoved :D
Click here -> Screenshot! :cool:
Hi-res 6-LCD -> LINK



EyeFinity -> Anandtech

Driving all of this is AMD's next-generation GPU, which will be announced later this month. I didn't leave out any letters, there's a single GPU driving all of these panels. The actual resolution being rendered at is 7680 x 3200; WoW got over 80 fps with the details maxed. This is the successor to the RV770. We can't talk specs but at today's AMD press conference two details are public: over 2 billion transistors and over 2 TFLOPs of performance. As expected, but nice to know regardless.
AMD's software makes the displays appear as one.
This will work in Vista, Windows 7 as well as Linux.
I played Dirt 2, a DX11 title at 7680 x 3200 and saw definitely playable frame rates.
I played Left 4 Dead and the experience was much better.
 
Should we dub that psolord's rule? Selling a product for what people are willing to pay for it is not "blood-sucking". Stop being so cheap :p

I agree totally... geesh, what is it with people today?? We really need to go back to the good old days of nV getting premium dollars ($999.00 MSRP for the 8800 GTX Ultra 512MB), when companies could expect higher premiums for +10% performance.
 
Come on, don't get mad!
What I meant was that Nvidia has to change strategy: they can no longer use the strategies they used for G80 and GT200 (high-end parts months before the mass-market parts). But that's my opinion; you can have a different one.

Well, they had pretty good mainstream parts in the 7600 GT/8600 GTS era that kept coming just a few months behind the top performer. The other thing is that people, spoiled by the 7900 GT to 7600 GT relation, expected a lot more from the next-gen mainstream card, something like the G94 9600 GT, and not a card 3x slower than that era's G80 Ultra.

G94 came out really late and in fact represents the real mainstream for the G80-G92 and G200 series, because nothing better came out of Nvidia in that segment... until the recently announced GT(S) 240, seven quarters behind G94 and almost 1.5 years after the first G200-flavored cards. And that should represent the mainstream card for GT200 as well? So after G80, nV went back to the GF3/4 -> GF MX relation between high-end top performance and real (budget) mainstream. Even though it's never really budget when it comes out of Nvidia's factory :rolleyes:

Assuming 512 ALUs * 2 (FMA) * 1.5 GHz = 1.5 TFLOPS, which is less than what the 5850 (assuming an 825 MHz clock for it too) delivers.

The HD 5850 will never get that close to 825 MHz (w/o OC :p). If the claims are true, the HD 5870 @ 850 MHz == 2.72 TFLOPS, and the somewhat cut-down HD 5850 they also claim (1440 SPs) @ 700 MHz gives 2.02 TFLOPS (and someone said it will be 700 MHz). Anyway, ATI should have huge OC potential, and let's hope it will not draw 190W but rather 150-160W, and that for $299 retailers won't fit a cheap PWM that can stand only that 150W plus some extremely cheap cooler solution just to keep ROI sky-high on a good-selling cheaper product. A 1GB card with an even more robust cooling solution shouldn't cost more than $120 to produce, somewhere around the 4870 512MB in that OEM price chart. So I really don't see a real explanation for why a cut-down product like the HD 5850 should cost more than $249 and the fully fledged HD 5870 1GB more than $299, while the 2GB XTX could carry a premium $399 price :LOL:
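
For reference, a quick sketch of the MADD math behind those figures (the SP counts and clocks are this thread's rumors, not confirmed specs):

```python
# Theoretical single-precision throughput: SPs x 2 FLOPs (one MADD/FMA) x clock.
# All configurations below are rumored figures from this thread, not confirmed specs.
def tflops(sps, clock_mhz):
    return sps * 2 * clock_mhz / 1e6

print(tflops(1600, 850))   # rumored HD 5870: 2.72 TFLOPS
print(tflops(1440, 700))   # rumored HD 5850: ~2.02 TFLOPS
print(tflops(512, 1500))   # hypothetical 512-ALU GT300 @ 1.5 GHz: ~1.54 TFLOPS
```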

On the other hand, ATI really went wild on Nvidia if anything near 80 TMUs at a stunning 850 MHz is true. If that GT300 @ 600mm2 (and beyond) ever comes out with some moderate OC potential at 40nm, then looking at their best GT200b @ 585 MHz on 55nm: if it had memory & TMU clocks @ 650 MHz, it would need 105 TMUs just to equalize with RV870's raw TMU performance, but considering GT300's speculated size (600mm2), some 120 TMUs @ 600 MHz look more realistic. And again, that's 50% more TMUs than GT200 has, but on a 3x smaller process node. Still, 50% more die space spent on TMUs is not a cheap way to produce chips that perform just somewhere around ATI's offering, RV870, with its much smaller die.
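
The TMU numbers above fall out of simple texel-rate scaling; a minimal sketch, assuming texel rate is just TMUs times core clock (the counts and clocks are this post's rumors and speculation):

```python
import math

# Texel fill rate ~ TMUs x clock. The RV870 figures are rumors; the GT300
# clocks are this post's speculation.
rv870_gtexels = 80 * 850 / 1000              # 68.0 GTexels/s

def tmus_to_match(target_gtexels, clock_mhz):
    return math.ceil(target_gtexels * 1000 / clock_mhz)

print(tmus_to_match(rv870_gtexels, 650))     # 105 TMUs @ 650 MHz
print(tmus_to_match(rv870_gtexels, 600))     # 114 TMUs @ 600 MHz (hence ~120 above)
```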

So I simply don't believe that Nvidia won't radically change something in GT300. Maybe ditching the separate SP and TMU clocks... too radical IMO; or maybe pumping the SP clock even higher than the 2.4x it is today (3x-4x pumped) and shrinking the die to 350-400mm2, with only 80 TMUs again but @ 850 MHz... oooh, now nV fanboys can't brag about how their solution has an enormous texturing advantage over the cheap poor man's ATI offering. If they simply go with the bigger-is-better politics they used with GT200, they'll need at least 240 TMUs to justify the chip and card price and give nV fanboys some false advantage again :LOL:. It's not that ATI doesn't have a good solution now, but they're setting the price bar so high that nV could simply beat it. And nV would go bankrupt if they sold even at the same price as ATI, because of yields even lower than GT200's (if it's even possible to produce a 600mm2 monster @ 40nm).

My bet is that GT300 is a chip in the sub-350mm2 class, and that they're ditching all that G70 legacy except the separate SP and TMU/memory clocks. It simply must be a radically new architecture, because anything beyond 400mm2 would result in poor yields, and with the same performance level as ATI cards that will by then have been dominating the market for 3-6 months, they'd have a lose-lose situation.

Maybe that's what the "Dominate" in someone's sig means. And maybe AMD will eat them up, if the legislation allows it, after they pay off their bonds in two and a half years :devilish:

If you look at the GT300 rumors, that die is small compared to the rumored 500+ mm^2
Sorry, but what's with that size? I never mentioned it, just that they need to raise their TMU count dramatically to keep pace with ATI, given their moderate TMU/memory clock speeds. As I explained above, and then some.
 
[Image: 20090910203831_eye3.jpg]


:oops:
 
Although I don't believe that these sites have 100% accurate info regarding the specs, let's assume for a moment that the info is accurate.

So we have 32 ROPs / 80 TUs / 1600 SPs at an 850 or 825 MHz core clock, and 153.6 GB/s or 166.4 GB/s, for the 5870.

(Although ABT does not say if it is 16 ROPs with twice the Z rate etc., or 32 ROPs.)
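
Both rumored bandwidth figures drop straight out of a 256-bit GDDR5 bus; a quick sketch (the 256-bit width is my assumption, carried over from RV770):

```python
# Bandwidth (GB/s) = bus width (bits) / 8 x effective data rate (Gbps).
# The 256-bit bus is an assumption (same as RV770); only the two bandwidth
# figures come from the rumored specs.
def bandwidth_gbs(bus_bits, effective_gbps):
    return bus_bits / 8 * effective_gbps

print(bandwidth_gbs(256, 4.8))   # 153.6 GB/s -> 1200 MHz (4.8 Gbps) GDDR5
print(bandwidth_gbs(256, 5.2))   # 166.4 GB/s -> 1300 MHz (5.2 Gbps) GDDR5
```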


The card is too long...

The chip doesn't seem to be more than 420mm2 like BSN was saying... (in fact it seems much smaller than that figure)

So why does ATI need such a long card?

One reason could be that maybe ATI is using more chips.

I mean, maybe ATI separated the display controllers from the main chip and is using separate chips, like what G80 had with the NVIO chip.

Also, if the ABT figure is correct (330mm2), then this is a possible scenario, because for these specs (same architectural logic as RV7X0) a DX11 chip at 330mm2 is smaller than it should be.

But then, I don't like the die size ratio with the lower part... (330mm2 / 181mm2?)

Anyway, if we have something like the specs these sites are implying (850 MHz, 32 ROPs / 80 TUs / 1600 SPs), then my estimation is that the 5870 will be, in the worst case, -25% of what Nvidia can possibly do with a GT300 (my GT300 estimation...).

About Eyefinity, I don't understand why they're promoting this feature with games and not with BD playback.

When I first heard the news about 3 simultaneous displays (single-GPU model), I assumed that they wanted to promote something like:

dual simultaneous HDMI outputs (with dual audio/video streams and dual HDCP) + the regular DVI.

So at that time I thought more about the possibilities regarding the dedicated HD part (UVD 3) of the chip...
 
I agree totally... geesh, what is it with people today?? We really need to go back to the good old days of nV getting premium dollars ($999.00 MSRP for the 8800 GTX Ultra 512MB), when companies could expect higher premiums for +10% performance.

What we really need is to get back to the good old days of AMD turning a decent operating profit. Sainthood only gets you so far.
 
Although I don't believe that these sites have 100% accurate info regarding the specs, let's assume for a moment that the info is accurate.

So we have 32 ROPs / 80 TUs / 1600 SPs at an 850 or 825 MHz core clock, and 153.6 GB/s or 166.4 GB/s, for the 5870.

(Although ABT does not say if it is 16 ROPs with twice the Z rate etc., or 32 ROPs.)


The card is too long...

The chip doesn't seem to be more than 420mm2 like BSN was saying... (in fact it seems much smaller than that figure)

So why does ATI need such a long card?

One reason could be that maybe ATI is using more chips.

I mean, maybe ATI separated the display controllers from the main chip and is using separate chips, like what G80 had with the NVIO chip.

Also, if the ABT figure is correct (330mm2), then this is a possible scenario, because for these specs (same architectural logic as RV7X0) a DX11 chip at 330mm2 is smaller than it should be.

But then, I don't like the die size ratio with the lower part... (330mm2 / 181mm2?)

Anyway, if we have something like the specs these sites are implying (850 MHz, 32 ROPs / 80 TUs / 1600 SPs), then my estimation is that the 5870 will be, in the worst case, -25% of what Nvidia can possibly do with a GT300 (my GT300 estimation...).

About Eyefinity, I don't understand why they're promoting this feature with games and not with BD playback.

When I first heard the news about 3 simultaneous displays (single-GPU model), I assumed that they wanted to promote something like:

dual simultaneous HDMI outputs (with dual audio/video streams and dual HDCP) + the regular DVI.

So at that time I thought more about the possibilities regarding the dedicated HD part (UVD 3) of the chip...

I very much doubt that. All the display engines are on the same chip; it wouldn't have made sense otherwise, given that it started as a notebook technology / a request from AIBs.
 
Also, if the ABT figure is correct (330mm2), then this is a possible scenario, because for these specs (same architectural logic as RV7X0) a DX11 chip at 330mm2 is smaller than it should be.

Is it? Someone counted that you could double RV770's SPs, TUs and RBEs, take everything else as it is, and still have ~570 million transistors "spare" to make it DX11 if the RV870 was 820mm^2.
 
I very much doubt that. All the display engines are on the same chip; it wouldn't have made sense otherwise, given that it started as a notebook technology / a request from AIBs.

Now that I've read the Anandtech article, with what he's saying, yes, it doesn't make sense (but for other reasons...). Anyway, the 181mm2 & 330mm2 figures are a little lower than they should be for the DX11 specs those sites are implying, since they use the same architectural logic for the TUs/SPs...

And anyway, the 5870 is a GPU for the AIB sector, not the notebook sector (I mean, maybe the notebook parts integrated the display controllers; remember that the 8800 GTX had the NVIO and the 8600 GTS did not, if I remember correctly).

I just wrote what I thought about what ABT and BSN were saying about the specs and Eyefinity...

I hate predicting GPUs... (too hard...)
 
Driving all of this is AMD's next-generation GPU, which will be announced later this month. I didn't leave out any letters, there's a single GPU driving all of these panels. The actual resolution being rendered at is 7680 x 3200; WoW got over 80 fps with the details maxed. This is the successor to the RV770. We can't talk specs but at today's AMD press conference two details are public: over 2 billion transistors and over 2 TFLOPs of performance. As expected, but nice to know regardless.

Whaaa? 24 megapixels at 80fps?
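
The arithmetic does check out; a one-line sanity check on Anand's numbers:

```python
# 7680 x 3200 is ~24.6 megapixels; at 80 fps that is roughly 2 gigapixels
# rendered per second (numbers straight from the quoted article).
pixels = 7680 * 3200
print(pixels / 1e6)        # 24.576 MP
print(pixels * 80 / 1e9)   # ~1.97 Gpixels/s
```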

I agree totally... geesh, what is it with people today?? We really need to go back to the good old days of nV getting premium dollars ($999.00 MSRP for the 8800 GTX Ultra 512MB), when companies could expect higher premiums for +10% performance.

Think something is too expensive? Don't buy it. Anybody who thinks HD 5870 at the currently rumoured specs is overpriced at $399 is being extremely greedy.
 

Dirt 2. Dx11. Playable framerates at 7680x3200. :oops:

If this is a first-gen Dx11 part and it can already run first-gen Dx11 titles at 7680x3200 at playable framerates... Uh... Dang...

Now the only thing left that would make life perfect is a single 24"-30" with a native 7680x3200 res. My life would be complete.

Interesting. And Samsung is going to make a line of Eyefinity multi-monitor displays?

Regards,
SB
 
Is it? Someone counted that you could double RV770's SPs, TUs and RBEs, take everything else as it is, and still have ~570 million transistors "spare" to make it DX11 if the RV870 was 820mm^2.

Sorry, I don't understand what you are trying to say.

What do you mean by:

if the RV870 was 820mm^2
 
I mean, maybe ATI separated the display controllers from the main chip and is using separate chips, like what G80 had with the NVIO chip.

Never. I'd never bet on that, because ever since RV100 they've pursued maximal integration, and since the R500 series we haven't even seen the simplest VIVO solutions that were there in the past, not to mention a fully fledged TV tuner. That's a part of the past for ATI, even though there was some announcement of an HD 3850-based All-in-Wonder card, AFAIR.

If it's really 190W, it's nothing new. The HD 2900 XT with its 160-170W consumption had the same solution, just like the cheaply made 3870 X2 (~190-200W) or the top-end 4870 X2. It's really needed for 24/7 stream applications more than for gaming. But still, 190W seems like much more than the HD 4870's 150W, if well under the 300-320W HD 4870 X2 solutions. On the other hand, 100W (33%) less power consumption for 20% more performance versus the last 4870 X2 shouldn't be overlooked.
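
A rough perf-per-watt sketch of that last comparison, taking the rumored ~190W / +20% figures and a ~300W HD 4870 X2 at face value:

```python
# Rumored HD 5870: ~1.2x the performance of an HD 4870 X2 at ~190W vs ~300W.
# Both inputs are this thread's rumors, not measured numbers.
x2_perf, x2_watts = 1.0, 300
r8xx_perf, r8xx_watts = 1.2, 190

gain = (r8xx_perf / r8xx_watts) / (x2_perf / x2_watts)
print(round(gain, 2))   # ~1.89x performance per watt
```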
 
I have to say I like the idea of a fillrate scaling graph that goes up to ~25MP. Pity no-one does them any more.

Jawed
 
Whaaa? 24 megapixels at 80fps?



Think something is too expensive? Don't buy it. Anybody who thinks HD 5870 at the currently rumoured specs is overpriced at $399 is being extremely greedy.

I didn't say (nor imply) that the $399 price tag was too expensive; in fact, I think AMD/ATI could easily charge $499-599 (1GB/2GB) given the supposed performance percentage over the 4870X2/GTX295 variants.
 