AMD: R8xx Speculation

How soon will Nvidia respond with the GT300 to ATI's upcoming RV870 lineup of GPUs?

  • Within 1 or 2 weeks

    Votes: 1 0.6%
  • Within a month

    Votes: 5 3.2%
  • Within couple months

    Votes: 28 18.1%
  • Very late this year

    Votes: 52 33.5%
  • Not until next year

    Votes: 69 44.5%

  • Total voters
    155
  • Poll closed.
Simply making something faster, costs be damned, is not terribly difficult or impressive.

While I see your point, I disagree somewhat. AMD is selling their card at close to $400 at this point in time. The fact that they could be selling it for less does not change how much current consumers are paying or what its current performance levels are. Given the lack of competition, if I needed to buy a top-of-the-line card today, I would pick the 5870.

Similarly, I will judge the G300, at launch, based on its price to me, the consumer, and its performance level, and how that compares to its competition at that time. I will repeat this process whenever there is a price drop, a new card is introduced, or whenever I'm thinking about buying or recommending a video card.

Ever since the 3870, there has been more and more talk about the underlying silicon costs, yields and potential profits (versus actual board costs to the consumer), like it's a win for the consumer. I don't see it that way. It's like Exxon Mobil boasting that they have the highest profit margins for a large cap company. Bully for EM, not so awesome for me filling up at the pump.

At the end of the day, if Nvidia launches a price/performance-competitive card, all the rumblings about the non-sustainability of their price models, the burden on their OEMs, etc., while tickling the part of my brain that likes to know more about the inner workings of the industry, have no part in my purchase decision at that given time. Now, if it were to actually translate into a retail price cut that is not matched by the other OEM, then I will weigh it accordingly.

Frankly, I do not expect the die size vs. perf/mm disparity to be any larger at Nvidia's next launch than it was at the last set of launches, and yet somehow Nvidia has kept the price war on an even keel (arguably). They know how much of a premium they can charge, and it is based on performance, not silicon size, I would argue. If they are taking a bigger hit on profit margins to bring me that level of performance at that price versus ATI, do I really care when it's time to buy a card? Heck no.

What I find impressive about a card at launch day is its technical capabilities and performance levels. While I can appreciate elegance and efficiency, I am no more moved by an average-performing part at a reasonable price than I am by the latest version of the Toyota Corolla. It's the high-end, high-performance segment that I find exciting, and it's maximum performance that I want to see.

I will say that I don't like to see cards that run too hot or too noisy, but given the improvements from each company I don't see one of them significantly leading the field in this aspect for more than a half-generation.
 
Meh ... LN.

Unless you want to spend $50K on a cooler and pay for an extra 15 kW on your electricity bill, it's not really relevant.
You're thinking of a big triple-stage cascade or autocascade cooler, though even those are no more than 1/3 of those figures. Liquid nitrogen and the associated cooling pots are actually quite cheap in comparison, less than $500 CAD total in Canada. It's just not very practical, because the LN only lasts so long. But this is getting OT...

Now has anyone thought of using the GPU to power a CAVE? With 6 display outputs, it's perfect.
 
While I see your point, I disagree somewhat. AMD is selling their card at close to $400 at this point in time. The fact that they could be selling it for less does not change how much current consumers are paying or what its current performance levels are. Given the lack of competition, if I needed to buy a top-of-the-line card today, I would pick the 5870.

Similarly, I will judge the G300, at launch, based on its price to me, the consumer, and its performance level, and how that compares to its competition at that time. I will repeat this process whenever there is a price drop, a new card is introduced, or whenever I'm thinking about buying or recommending a video card.

Ever since the 3870, there has been more and more talk about the underlying silicon costs, yields and potential profits (versus actual board costs to the consumer), like it's a win for the consumer. I don't see it that way. It's like Exxon Mobil boasting that they have the highest profit margins for a large cap company. Bully for EM, not so awesome for me filling up at the pump.

At the end of the day, if Nvidia launches a price/performance-competitive card, all the rumblings about the non-sustainability of their price models, the burden on their OEMs, etc., while tickling the part of my brain that likes to know more about the inner workings of the industry, have no part in my purchase decision at that given time. Now, if it were to actually translate into a retail price cut that is not matched by the other OEM, then I will weigh it accordingly.

Frankly, I do not expect the die size vs. perf/mm disparity to be any larger at Nvidia's next launch than it was at the last set of launches, and yet somehow Nvidia has kept the price war on an even keel (arguably). They know how much of a premium they can charge, and it is based on performance, not silicon size, I would argue. If they are taking a bigger hit on profit margins to bring me that level of performance at that price versus ATI, do I really care when it's time to buy a card? Heck no.

What I find impressive about a card at launch day is its technical capabilities and performance levels. While I can appreciate elegance and efficiency, I am no more moved by an average-performing part at a reasonable price than I am by the latest version of the Toyota Corolla. It's the high-end, high-performance segment that I find exciting, and it's maximum performance that I want to see.

I will say that I don't like to see cards that run too hot or too noisy, but given the improvements from each company I don't see one of them significantly leading the field in this aspect for more than a half-generation.
While you are making very valid points, I think you missed Jason's underlying message: Cypress is just like the G92 and will probably hit the $200 mark faster than the G300.
 
While you are making very valid points, I think you missed Jason's underlying message: Cypress is just like the G92 and will probably hit the $200 mark faster than the G300.

Well, that I can definitely agree with - I would be surprised if it didn't hit $200 faster than G300. It is ATI's top-of-the-line part, and while I can expect a future 5890 to hold top-of-the-line prices up, I'm not sure that the 5870 will drop in price much faster than the 4870 did, unless Nvidia releases an astonishingly faster and more efficient card than everyone is expecting.
 
Now has anyone thought of using the GPU to power a CAVE? With 6 display outputs, it's perfect.

For a CAVE, you want synchronised output as you usually have one computer per wall. On the other hand, you can already drive several walls from a single computer using Quadro cards. Maybe if ATI releases a FireGL version of it with sync'ed output (i.e. on-card synchronised and multi-card synchronisation), but otherwise, I don't see anyone running their CAVE using a single consumer card.
 
While I see your point, I disagree somewhat. AMD is selling their card at close to $400 at this point in time. The fact they could be selling it for less, does not change how much current consumers are paying or what it's current performance scores / levels are. Given the lack of competition, if I needed to buy a top of the line card today, I would pick the 5870.

Similarly, I will judge the G300, at launch, based on it's price to me, the consumer, and it's performance level and how that compares to its competition at that time. I will repeat this process whenever there is a price drop or a new card introduced or whenever I'm thinking about buying or recommending a video card.

As you should. My point isn't that it's ssoooo great that AMD seems to have a really healthy margin built into the 58xx cards. My point is that if the GT300 products are a good chunk faster - so what? It only really matters if they're a good chunk faster and *just as affordable*. Or at the very least, more expensive in proportion to the performance boost. The size and cost of the 58xx cards seems to indicate that, over the coming months, they have a lot of room to really come down the cost curve if they want to.

Maybe they will, maybe they won't. But the potential is there. If G300 is >450 mm² and the yields aren't awesome, the potential for it to be in cards that a really large number of people can afford is greatly diminished.

The volume of graphics cards sold over the $300 level is fairly small. The real volume is way below $200, even. (hell, these days, the real volume is in *notebooks*).

You want developers to step it up with PC games and to hop on the DX11 bandwagon (and REALLY make use of it, not the more subtle stuff we've seen so far), you better hope someone - NV or AMD or even Intel - really provides a dramatic improvement in performance and DX11 functionality in a sub-$200 card.

If someone tells me G300 is going to be faster than RV870, that doesn't really mean a whole lot to me unless you tell me it's also going to cost the same.
 
Maybe if ATI releases a FireGL version of it with sync'ed output (i.e. on-card synchronised and multi-card synchronisation), but otherwise, I don't see anyone running their CAVE using a single consumer card.
What about 4 of their Eyefinity cards in quadfire? 24 outputs might just do it.
 
The volume of graphics cards sold over the $300 level is fairly small. The real volume is way below $200, even. (hell, these days, the real volume is in *notebooks*).

You want developers to step it up with PC games and to hop on the DX11 bandwagon (and REALLY make use of it, not the more subtle stuff we've seen so far), you better hope someone - NV or AMD or even Intel - really provides a dramatic improvement in performance and DX11 functionality in a sub-$200 card.

Well, I think the 5850 has the potential to be this generation's 8800GTS/GT (as in: nearly high-end performance and plenty of power to really make use of the new API, at a mainstream price). Unless of course Nvidia comes up with even better value in the $200-$300 bracket.
 
What about 4 of their Eyefinity cards in quadfire? 24 outputs might just do it.

The problem is basically to v-sync them all: you don't want to display frame 55 on the right wall while frame 56 pops up on the middle wall. The number of outputs is not a problem at all for current CAVEs, as you have one full computer per wall (so each GPU is driving one wall). And even if not, sticking several Quadros into one computer to get the required number of display outputs is hardly a problem if you can afford a CAVE.

Edit: I don't get your point. For CAVEs, you can already run all displays from one PC using multiple Quadros, or you can drive all displays using multiple PCs with a Quadro each, and price doesn't matter too much in that segment anyway, so what would be the reason to stick in an HD 5k series card? The only argument I can see is that you need fewer cards (you could hook up all 6 walls to a single card), but what for? To save money/power? As I said, the cost of the GPUs is a rather small part compared to installation/air conditioning/projectors etc.
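The swap-synchronisation requirement described above (no wall may present frame N+1 until every wall has presented frame N) is what Quadro framelock/swap-group hardware provides across cards and PCs. As a toy illustration only, with one thread standing in for each rendering wall and a software barrier standing in for the hardware swap barrier:

```python
import threading

NUM_WALLS = 4
FRAMES = 3

# Stand-in for the hardware swap barrier: releases only once every wall is ready.
swap_barrier = threading.Barrier(NUM_WALLS)
presented = {w: [] for w in range(NUM_WALLS)}

def wall(wall_id):
    for frame in range(FRAMES):
        # ... each wall renders its view here, possibly at different speeds ...
        swap_barrier.wait()              # no wall presents until all have rendered
        presented[wall_id].append(frame)  # "present" the frame in lockstep

threads = [threading.Thread(target=wall, args=(w,)) for w in range(NUM_WALLS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Every wall ends up having presented the identical frame sequence.
```

This is only a sketch of the concept; a real CAVE does this at the display-scanout level (genlock plus swap groups), not in application code.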
 
The real issue isn't how GT300 measures up to the 5870 in perf/$. The real issue is whether the cut-down GTX 360 beats the 5870 in perf/$, as those are the cards that will be at close enough prices to compare. Last time too, the 4870 came so close to the GTX 260 that many people said at the 4870's launch that the GTX 260 was DOA. And that was the real cause of the price crash that followed.

And here's the catch: GT300 needs to be so fast that its salvage cousin will beat Cypress. GT300 merely beating Cypress won't be enough, by some distance.
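The perf/$ argument above is easy to make concrete. All performance indices and prices below are made-up placeholders purely to illustrate the salvage-part scenario, not real figures for any of these cards:

```python
# Toy perf-per-dollar comparison. Numbers are hypothetical placeholders,
# NOT real GT300 / GTX 360 / 5870 prices or benchmark results.
def perf_per_dollar(perf_index, price_usd):
    """Higher is better: relative performance per dollar spent."""
    return perf_index / price_usd

cypress_5870 = perf_per_dollar(100, 380)  # baseline: index 100 at ~$380
full_gt300   = perf_per_dollar(130, 550)  # faster, but priced out of the comparison
salvage_360  = perf_per_dollar(110, 350)  # the card actually competing on price

# It's the salvage part, sitting at a comparable price, that has to win
# this ratio for the argument above to go Nvidia's way.
print(round(cypress_5870, 3), round(full_gt300, 3), round(salvage_360, 3))
```

Under these placeholder numbers the full chip loses on perf/$ despite being the fastest card, which is exactly the point being made.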
 
Another catch: GT300 needs to be here pronto, before ATI even thinks of a redone 950 MHz Cypress with souped-up high-voltage 1500 MHz GDDR5. They could actually kill margins for a lower-volume part, I guess. :p

And of course the sleazy Palit/TUL cost-down boards. A 5850 for $200 is a likely target, I suspect, after its RRP drops to $230/240, with Juniper XT reference boards going for $150+ ($200 seems a stretch, but without competition, who knows).
 
AnandTech reported that Juniper = 14 SIMD.

Cypress = 20 SIMD
Juniper = 14 SIMD ? (-6 SIMD : 1120 ALU)
Redwood = 8 SIMD ? (-6 SIMD : 640 ALU = 2xRV730 = RV740)
Cedar = 2 SIMD ? (-6 SIMD : 160 ALU = 2xRV710)

Juniper (RV860)

SIMD : 10
Clocks : 850, 1200
Memory : 1GB 128-bit GDDR5
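The speculated SIMD-to-ALU figures above are internally consistent if, as assumed here, each RV8xx-generation SIMD carries 16 stream cores of 5-wide VLIW, i.e. 80 ALUs per SIMD. A quick sanity check (the SIMD counts are the rumoured ones from the post, not confirmed specs):

```python
# Assumption: 16 stream cores x 5-wide VLIW = 80 ALUs per SIMD (RV8xx family).
ALUS_PER_SIMD = 16 * 5

def alu_count(simds):
    """Total stream processors ('ALUs') for a given SIMD count."""
    return simds * ALUS_PER_SIMD

# Rumoured lineup from the post above:
lineup = {"Cypress": 20, "Juniper": 14, "Redwood": 8, "Cedar": 2}
for chip, simds in lineup.items():
    print(f"{chip}: {simds} SIMD -> {alu_count(simds)} ALUs")
```

This reproduces the 1600/1120/640/160 ALU figures quoted in the speculation, so at least the arithmetic holds together.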
 
Hmm, if that IS the case, Anand would have been talking about the 5830, probs at 200-220USD.

And Juniper is a 150USD card at best.
 
If people still insist on using RVxxx names even though AMD has abandoned them, Juniper should be RV840, not RV860.
 