AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks

    Votes: 1 0.6%
  • Within a month

    Votes: 5 3.2%
  • Within couple months

    Votes: 28 18.1%
  • Very late this year

    Votes: 52 33.5%
  • Not until next year

    Votes: 69 44.5%

  • Total voters
    155
  • Poll closed.
1. Not having an aggressive strategy for professional markets might save resources, yet it also reduces the chances to raise the company's market share in those markets by a significant rate.

2. CUDA can co-exist with any other open-standard API, as Intel's Ct will. For how long, only time and the corresponding demand can tell. Neither approach is necessarily better.
 
Lol what? In what way is G92b "old" compared to GT200? Are you suggesting GT200 should be sold for $99?

You had stated that G92/94 were not relevant in this discussion; I was just pointing out that they are incredibly relevant when determining how well ATI has executed. And further pointing out that AMD's execution with RV770 has been FAR better than Nvidia's execution with GT200.

Why the artificial separation between G9x and GT200? They're both Nvidia products aimed at different price levels. Or is selling G9x "unfair" in some way now? In any case, all this doesn't answer the question of why G92 is relevant for the upcoming battle.

I'm not the one that inserted the artificial separation. You did when you stated:

Not really sure what G92/G94 have to do with anything. Are you implying that future Nvidia products will inevitably be less competitive than G92/G94?

G92/94 has everything to do with it. And yes. GT200 so far is FAR less competitive than G92/94. Even compared to past Nvidia performance.

Even Geforce FX did relatively better versus Geforce 4 compared to how well GT200 is doing with regards to G92.

And other than that blip, what other GeForce new-generation or refresh chip has sold worse in comparison to the chip it was replacing?

Did G92 sell worse than G80 after it was launched? How about G80 selling worse than G71 when it launched? Did G71 sell worse than G70 after it launched? Etc...

From a sales standpoint, Nvidia with GT200 so far has executed no better than AMD did with R600. From a quality standpoint, however, there's no argument that GT200 is far better.

In both cases the newer high end card was outsold by the older high end card.

Regards,
SB
 
AMD's strategy is to not waste money building a GPU almost nobody buys. Instead use that money to refresh all products on a more frequent and more timely basis. And perhaps use that money in other places, too.

Yes, while I'm sure AMD would love the performance crown, it just doesn't work for them. In order to consistently compete for the performance crown you must sink more R&D into a bigger, costlier chip, meaning you then have to sell significantly more at a higher price in order to receive the same ROI and margins.

By targeting a lower, more volume-oriented segment you can reduce R&D a bit and target a die size that is not as large and is therefore cheaper to manufacture. In return you don't have to sell your product for as high a price to see a good ROI and good margins.

AMD is approaching this purely from a business standpoint. Which I'm sure is pretty abhorrent to most enthusiasts, as what are they thinking, not targeting the enthusiast space? :)

But so far it's working. Whether it will continue to work for them if Nvidia adopts a similar approach is hard to tell. Likewise, will they still be able to keep their more mainstream-oriented chips within spitting distance of Nvidia's enthusiast-targeted chips? That's going to be a telling one.

If they can't keep it at least within spitting distance, performance-wise (e.g. 4870 versus GTX 280), then it might falter.

Regards,
SB
 
You had stated that G92/94 were not relevant in this discussion; I was just pointing out that they are incredibly relevant when determining how well ATI has executed. And further pointing out that AMD's execution with RV770 has been FAR better than Nvidia's execution with GT200.

I think you might've gotten crossed up a bit. Sound Card brought up G92/G94 in the context of AMD's prospects for next-generation competition. I didn't bring it up, and it didn't have anything to do with the discussion of AMD's past execution.

From a sales standpoint Nvidia with GT200 so far has executed no better than AMD did with R600.

Based on what numbers?

If they can't keep it at least within spitting distance, performance-wise (e.g. 4870 versus GTX 280), then it might falter.

Exactly, that's the bottom line.
 
By targeting a lower, more volume-oriented segment you can reduce R&D a bit and target a die size that is not as large and is therefore cheaper to manufacture. In return you don't have to sell your product for as high a price to see a good ROI and good margins.

AMD is approaching this purely from a business standpoint. Which I'm sure is pretty abhorrent to most enthusiasts, as what are they thinking, not targeting the enthusiast space? :)

:)

Just a small comment - being interested in a field or in technology is not the same as compulsively buying the latest and greatest.
That kind of enthusiast is simply a niche shopaholic.
Stationary PCs, never mind their internal accessories, are losing their lustre for tech shoppers anyway, so targeting that category doesn't seem like a recipe for success.
 
Yes, while I'm sure AMD would love the performance crown, it just doesn't work for that. In order to consistently compete for the performance crown you must sink more R&D into a bigger/costlier chip. Meaning you then have to sell significantly more at a higher price in order to receive the same ROI and margins.
I think the development of a new architecture is a far higher investment than scaling the number of memory interfaces/SIMDs/ROPs/etc. Trying to do it in parallel for multiple configurations would add costs, but a "simple" choice of which market segment to introduce it in is about strategy, not development cost, IMO.

The slower consumerization of new architectures is something you pull in a monopoly or an oligopoly with either tacit or explicit deals (illegal, but when has that ever stopped people?) to take it slow ... when ATI slipped behind they did the natural thing to claw back market share, abandon the (probably tacit) agreement and go straight for the higher volume market allowing them to have a much easier time competing.

The only problem is that if you go for fast consumerization you directly diminish the margins in the higher end ... making it far less interesting to actually develop the lower volume high end parts, especially with stretched resources.
 
I think you might've gotten crossed up a bit. Sound Card brought up G92/G94 in the context of AMD's prospects for next-generation competition. I didn't bring it up, and it didn't have anything to do with the discussion of AMD's past execution.

I bring up G92 as an example. There is, IMHO, probably a mess going to be left behind in the transition from GT200 to GT300, in terms of G92 becoming just about completely irrelevant at this point. Perhaps giving "RV870pro" a good bit more room to "breathe fire".

Exactly, that's the bottom line.

Inversely, if NV can't keep the price within spitting distance, it's going to falter.
 
Why don't you think RV870 will have a lesser degree of effect this next go around? :p
Oh, come on, it's easy.
DX11 vs DX11 instead of DX10.1 vs DX10.
40nm vs 40nm instead of 55nm vs 65nm.
GDDR5 vs GDDR5 instead of GDDR5 vs GDDR3.
That's enough to be sure that this time AMD won't be as much ahead of NV as it was last time.
 
Oh, come on, it's easy.
DX11 vs DX11 instead of DX10.1 vs DX10.
40nm vs 40nm instead of 55nm vs 65nm.
GDDR5 vs GDDR5 instead of GDDR5 vs GDDR3.
That's enough to be sure that this time AMD won't be as much ahead of NV as it was last time.
Not if they ship their next gen GPU(s) 6 months ahead of NVIDIA (or vice versa..)
 
Now, maybe Nvidia has all the OEM contracts for discrete cards locked down (or whatever you would call cards included in prebuilt systems), as far as that side goes.

Actually, it seems that AMD/ATi has done some good business there.
A few weeks ago, I had to order a new PC at work. It was *impossible* to get a PC with a GeForce from Dell.
They do not sell ANY models with GeForce cards currently. They are all either Radeons, or if you go into the workstation class, they use Quadros. But regular GeForces? Nope.
So it smells like ATi has made a good deal with Dell. And Dell is big, you'll notice that in your sales.

So I had to cheat, since our company officially only supports nVidia hardware, and that's what I need to develop on. We decided on ordering a machine with a cheapo Quadro, and ordering a separate 9800GTX+, and put it in ourselves.
 
Fishing for D3D11

Had a quick fish in the ILAssembler.dll that comes as part of Stream Kernel Analyzer and found some D3D11-related tokens:

Code:
srv_struct_load srv_raw_load dcl_struct_srv dcl_raw_srv append_buf_consume append_buf_alloc
There's also atomics for UAV:
Code:
uav_read_cmp_xchg uav_read_xchg uav_read_xor uav_read_or uav_read_and uav_read_umax uav_read_umin uav_read_max uav_read_min uav_read_rsub uav_read_sub uav_read_add uav_cmp uav_xor uav_or uav_and uav_umax uav_umin uav_max uav_min uav_rsub uav_sub uav_add uav_struct_store uav_raw_store uav_store uav_struct_load uav_raw_load uav_load
and atomics for LDS:
Code:
lds_read_cmp_xchg lds_read_xchg lds_read_xor lds_read_or lds_read_and lds_read_umax lds_read_umin lds_read_max lds_read_min lds_read_rsub lds_read_sub lds_read_add lds_cmp lds_xor lds_or lds_and lds_umax lds_umin lds_max lds_min lds_rsub lds_sub lds_add lds_store_vec lds_load_vec lds_store lds_load
So I guess LDS is continuing. No sign of GDS-specific stuff, though.
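These token names line up with the compute-shader constructs D3D11 exposes in HLSL. As a rough illustration (all buffer and variable names here are hypothetical, purely to show which HLSL operations would plausibly compile down to tokens like srv_raw_load, append_buf_alloc, uav_read_add and lds_add):

```hlsl
// Hypothetical D3D11 compute shader touching the features behind the tokens above.
ByteAddressBuffer rawInput;               // dcl_raw_srv / srv_raw_load
StructuredBuffer<uint2> structInput;      // dcl_struct_srv / srv_struct_load
AppendStructuredBuffer<uint> appendOut;   // append_buf_alloc
ConsumeStructuredBuffer<uint> consumeIn;  // append_buf_consume
RWByteAddressBuffer uav;                  // uav_* stores and atomics

groupshared uint ldsCounter;              // LDS; lds_* atomics

[numthreads(64, 1, 1)]
void main(uint3 tid : SV_DispatchThreadID, uint gi : SV_GroupIndex)
{
    if (gi == 0)
        ldsCounter = 0;
    GroupMemoryBarrierWithGroupSync();

    uint v = rawInput.Load(tid.x * 4);    // srv_raw_load

    uint prev;
    uav.InterlockedAdd(0, v, prev);       // "read" form: returns old value (uav_read_add)
    uav.InterlockedMax(4, v);             // no return value -> plain uav_max form

    InterlockedAdd(ldsCounter, 1);        // atomic on groupshared memory (lds_add)

    appendOut.Append(v);                  // append_buf_alloc + store
}
```

Note the "read"-suffixed tokens would correspond to the HLSL atomic overloads that hand back the original value, while the plain forms discard it.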

Jawed
 
Oh, come on, it's easy.
DX11 vs DX11 instead of DX10.1 vs DX10.

Did this even matter? I will say it will matter a bit more when DX11 hits.

40nm vs 40nm instead of 55nm vs 65nm.

Process node is irrelevant. In the end, it's about the die size. Unless you think GT300 is a little guy itself. GT200 has been on 55nm longer than it was on 65nm.

GDDR5 vs GDDR5 instead of GDDR5 vs GDDR3.

Again, irrelevant. Unless, once again, you think GT300 is in the same vein as RV770. GT200 has more bandwidth than RV770, and GT300 is 512-bit. GDDR5 on it just means it's going to be even more expensive.
 
Actually, it seems that AMD/ATi has done some good business there.
A few weeks ago, I had to order a new PC at work. It was *impossible* to get a PC with a GeForce from Dell.
They do not sell ANY models with GeForce cards currently. They are all either Radeons, or if you go into the workstation class, they use Quadros. But regular GeForces? Nope.
So it smells like ATi has made a good deal with Dell. And Dell is big, you'll notice that in your sales.

So I had to cheat, since our company officially only supports nVidia hardware, and that's what I need to develop on. We decided on ordering a machine with a cheapo Quadro, and ordering a separate 9800GTX+, and put it in ourselves.

So you don't test for compatibility on hardware other than NVIDIA? Yikes!

That would be like forgetting to optimize for the worst browser out there, Internet Explorer...
 
So you don't test for compatibility on hardware other than NVIDIA? Yikes!

That would be like forgetting to optimize for the worst browser out there, Internet Explorer...

Yes, but our clients don't run anything but nVidia hardware. This saves us a lot of time and money.
This isn't regular 'off-the-shelf' commercial software anyway. We know who our clients are.
So normally our software will never run on anything but nVidia hardware anyway. And if it ever does, people just aren't entitled to support.
Having said that, one of our developers does have a laptop with a Radeon chip, and we've not encountered any problems so far.
 
Again, irrelevant. Unless, once again, you think GT300 is in the same vein as RV770. GT200 has more bandwidth than RV770, and GT300 is 512-bit. GDDR5 on it just means it's going to be even more expensive.
But instead of there being a ~20% difference in available bandwidth, we could be looking at a 100% difference.

X2 may well peak at ~100% performance but that's really stretching it. AMD has to more than double performance over RV770 if NVidia merely doubles GT200 performance in GT300. It seems pretty unlikely that RV870 is even going to approach 2x RV770 performance.

If GTX360 is 80% of GTX380 performance for $400, that prices RV870 at no more than $250. Maybe NVidia will go for $450+, with GTX380 at $650.

Jawed
 
NV should probably make a true workstation-only chip this time with the GT300 series, if they are to keep up their domination of the high-end professional market, and not bin an expensive ASIC as an overpriced/overengineered gamer SKU like GTX280. Then a lower-grade variant of the architecture could go for a kind of sweet-spot target in the consumer space, to rival the flagship RV8xx part.
 
Is GT200 overengineered? I think it's only an enhanced G80 with a modified TF:TA ratio and some DP units... I'd say it's rather oversized, not overengineered.
 