AMD: R7xx Speculation

I have never seen so much disinformation on B3D.

What is happening?

It's called rumors and speculation. But I'd be happy if you just uploaded all the documents you possess with the correct information; that'd make everybody happy.
 
Well, those ATi slides are based on the DX9 versions of games; in the DX10 versions their cards still have a performance deficit... So A) AMD hasn't worked on its DX10 drivers much, or B) something is hurting DX10 performance?
 
Since the G200 in the GTX260 is a "salvage part" and the GTX260 uses potentially less expensive GDDR3 instead of GDDR5 (plus it may be that the GTX260 448MB will be a direct competitor of the 4870 512MB, not the GTX260 896MB).
The memory part is right, but not the "salvage" part:
wafers are not seeded with "this will be a 280, this will be a 260".
NV pays $10K per wafer (say). If they get 100 working chips, that makes $100 per chip regardless of speed/quality bins.

In the end we'll never know for sure - until maybe next quarter's financial results, prices, availability, etc. What will be good is if A/MD/Ti finally gets something competitive in time ;)
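A minimal sketch of that per-chip arithmetic, in Python, treating the $10K wafer price above purely as a placeholder figure:

# Back-of-the-envelope: cost per working chip is wafer price / good dies,
# independent of which speed/quality bin a given die ends up in.
def cost_per_chip(wafer_price_usd, working_chips):
    return wafer_price_usd / working_chips

print(cost_per_chip(10_000, 100))  # -> 100.0 (USD per chip)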
 
Well, those ATi slides are based on the DX9 versions of games; in the DX10 versions their cards still have a performance deficit... So A) AMD hasn't worked on its DX10 drivers much, or B) something is hurting DX10 performance?
There is a consistent 40%+ performance increase in the DX10 benches from the PR slides.

It's called rumors and speculation. But I'd be happy if you just uploaded all the documents you possess with the correct information; that'd make everybody happy.
:LOL:
 
Well, those ATi slides are based on the DX9 versions of games; in the DX10 versions their cards still have a performance deficit... So A) AMD hasn't worked on its DX10 drivers much, or B) something is hurting DX10 performance?

Really?

[attached image: DX9/DX10 benchmark comparison slide]


Because as I see it, DX10 apps get a bigger lead over the DX9 titles when compared to a 9800GTX that's "supposed to have vastly better DX10 drivers".
 
There is a consistent 40%+ performance increase in the DX10 benches from the PR slides.


:LOL:

Really?

[attached image: DX9/DX10 benchmark comparison slide]


Because as I see it, DX10 apps get a bigger lead over the DX9 titles when compared to a 9800GTX that's "supposed to have vastly better DX10 drivers".

Doesn't really happen in the real world; they must have picked certain parts of the games where it does better. The only exception is COJ, and even the HD3870 had an advantage there.
 
NV pays $10K per wafer (say). If they get 100 working chips, that makes $100 per chip regardless of speed/quality bins.
We don't know what NVIDIA pays for what.

What will be good is if A/MD/Ti finally gets something competitive in time ;)
You're right about "in time" but I have my doubts about "competitive"...
I don't see what changed since before we knew about 800 SPs.
4850 is still going to compete with G92/b.
4870 is still somewhere in between G92/b and GTX260.
800 SPs with (relatively) fast DP math will be a small revolution for the GPGPU market, but on the general "games" market they won't make much more difference than the earlier presumed 480 SPs.
And R700 turns out to be another (crappy) AFR card with doubled GDDR5, so even if it's faster than the GTX280 in 3DMark, it'll cost more to produce and it'll be slower in like 90% of new DX10 games.
And then comes "G200b" and GDDR5 support from NV.
We'll see how it all plays out this time but I really don't see any reason to be overly optimistic about RV770.
 
Really?

[attached image: DX9/DX10 benchmark comparison slide]


Because as I see it, DX10 apps get a bigger lead over the DX9 titles when compared to a 9800GTX that's "supposed to have vastly better DX10 drivers".
I was wondering...
Is the 4850 supposed to cost $200 at launch?
Because that's the price of the _8800GT_ in this comparison...
 
Doesn't really happen in the real world; they must have picked certain parts of the games where it does better. The only exception is COJ, and even the HD3870 had an advantage there.
Is that consistent, and with the latest Catalyst?
 
Wouldn't a GT200 card with GDDR5 push the bandwidth to somewhere over 200 GB/sec?

I wonder if the 55nm GT200b will support GDDR5, or if GT200 will require a further redesign.


Can't wait for R700 / 4870X2.
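For a rough sanity check on that figure: assuming GT200 keeps its 512-bit bus, bandwidth is just the bus width in bytes times the effective per-pin data rate. The GDDR5 data rates in the sketch below are illustrative of parts shipping around this time (e.g. ~3.6 Gbps on the HD 4870), not any announced GT200b spec:

# Bandwidth (GB/s) = (bus width in bits / 8) * effective data rate (Gbps per pin).
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(512, 2.214))  # ~141.7 GB/s - GTX 280 as shipped, GDDR3
print(bandwidth_gb_s(512, 3.6))    # ~230.4 GB/s - with 3.6 Gbps GDDR5
print(bandwidth_gb_s(512, 4.0))    # ~256.0 GB/s - with 4.0 Gbps GDDR5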
 
Wouldn't a GT200 card with GDDR5 push the bandwidth to somewhere over 200 GB/sec?

I wonder if the 55nm GT200b will support GDDR5, or if GT200 will require a further redesign.


Can't wait for R700 / 4870X2.
There was a semi-official confirmation by bit-tech that D12U will be the first Nvidia GPU to adopt GDDR5 in late 2009.
 
You're right about "in time" but I have my doubts about "competitive"...
I don't see what changed since before we knew about 800 SPs.
4850 is still going to compete with G92/b.
4870 is still somewhere in between G92/b and GTX260.
800 SPs with (relatively) fast DP math will be a small revolution for the GPGPU market, but on the general "games" market they won't make much more difference than the earlier presumed 480 SPs.
And R700 turns out to be another (crappy) AFR card with doubled GDDR5, so even if it's faster than the GTX280 in 3DMark, it'll cost more to produce and it'll be slower in like 90% of new DX10 games.
And then comes "G200b" and GDDR5 support from NV.
We'll see how it all plays out this time but I really don't see any reason to be overly optimistic about RV770.
4850 will be a bit more expensive than 8800GT, but significantly faster. 4870 will be a bit more expensive than 9800GTX but significantly faster. That means ATI is actually competing with Nvidia on price/performance for the first time since R580. Why wouldn't you be optimistic about that? The unexpectedly low price of the GTX260 definitely suggests that Nvidia is taking ATI seriously this time round (even though there's no competition at the high end).

R700 is still an unknown quantity at this point. If it does actually have a shared memory architecture, it may do a much better job of AFR than any previous product. If not... well, in games where AFR works well, it should quite significantly outperform GTX280 for about the same money. How many games that is remains to be seen.
 
There was a semi-official confirmation by bit-tech that D12U will be the first Nvidia GPU to adopt GDDR5 in late 2009.


Hmmm, that sounds like a true next-gen DX11 / Shader Model 5 GPU with a totally different architecture than G80/G92 and GT200. Something that'll go up against R8xx and Larrabee.
 
The memory part is right, but not the "salvage" part:
wafers are not seeded with "this will be a 280, this will be a 260".
NV pays $10K per wafer (say). If they get 100 working chips, that makes $100 per chip regardless of speed/quality bins.
Based on some rumors:

Die size: 24mm x 24mm -> 576mm²
Wafer: 300mm
Dies/wafer: 89 ~ 92
Yields: ~40%
Good dies/wafer: ~36

GTX 260 = ~27 dies/wafer
GTX 280 = ~9 dies/wafer
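Those dies-per-wafer numbers are consistent with the usual gross-die approximation for a square die on a 300mm wafer; a sketch below, where the 3mm edge exclusion is an assumption chosen to land in the quoted 89-92 range rather than a known figure:

import math

# Gross dies per wafer for a square die, standard approximation:
#   dies ~= pi*d^2 / (4*A) - pi*d / sqrt(2*A)
# with d the usable wafer diameter (mm) and A the die area (mm^2).
def dies_per_wafer(wafer_diameter_mm, die_area_mm2, edge_exclusion_mm=3.0):
    d = wafer_diameter_mm - 2 * edge_exclusion_mm
    return math.pi * d**2 / (4 * die_area_mm2) - math.pi * d / math.sqrt(2 * die_area_mm2)

gross = dies_per_wafer(300, 24 * 24)      # ~91 candidate dies
print(round(gross), round(gross * 0.40))  # ~36 good dies at the rumoured ~40% yield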
 
Wouldn't a GT200 card with GDDR5 push the bandwidth to somewhere over 200 GB/sec?

I wonder if the 55nm GT200b will support GDDR5, or if GT200 will require a further redesign.


Can't wait for R700 / 4870X2.

GT200b will support GDDR5 and it is coming sooner than you think.
 
Is that consistent, and with the latest Catalyst?


Not at all. At certain parts of the games, yes, but those are all unplayable settings in most of those games on both cards! We already know that when the G92s hit a bandwidth/memory bottleneck they take a nosedive, but why use benchmarks that aren't even playable on an HD4850 either? They are pretty much pointless, apart from Quake, Doom and Far Cry.

A fair comparison (price excluded at this point) would be against the 9800 GTX, where bandwidth is similar. The price of the GTX will be lowered in due time, as will the 9800 GT's, which will undercut the 4850. NV does have room to play here: AMD is only taking around 25-30% margins on their new chips, while NV has been taking around 35-40% on their old chips, so they do have some space for a price war. It all depends on how fast NV ramps up the G92b, though. I think it's a straight shrink, but they can clock these chips higher, since the clocking headroom for the G92 is very good and only limited by voltage.
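To put a number on that "room for a price war" point: a gross margin translates into a price floor as price = cost / (1 - margin). A small sketch with a purely hypothetical $100 chip cost and the rough margin figures mentioned above:

# Price floor that still preserves a target gross margin.
# The $100 unit cost is hypothetical; the 25% / 40% margins are the rough
# figures from the post above, not confirmed numbers.
def floor_price(unit_cost_usd, gross_margin):
    return unit_cost_usd / (1.0 - gross_margin)

print(round(floor_price(100, 0.25)))  # ~133 USD at a 25% margin (the AMD-style figure)
print(round(floor_price(100, 0.40)))  # ~167 USD at a 40% margin (the NV-style figure)
# The gap between the two is roughly the headroom the higher-margin vendor has
# for price cuts before its margins fall to the other's level.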
 
Based on some rumors:

Die size: 24mm x 24mm -> 576mm²
Wafer: 300mm
Dies/wafer: 89 ~ 92
Yields: ~40%
Good dies/wafer: ~36

GTX 260 = ~27 dies/wafer
GTX 280 = ~9 dies/wafer
Obviously I said "if they get 100" as it's a nice round number, and we don't know for sure how many good/bad chips they get. 40% is the minimum they need in order to return some profit - so maybe GT2xx was six months late because they tried to increase yield. With 50-60% yield we'd be looking at $160-200 per chip; funny how a few pages earlier some people claimed such prices :D
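Extending the same back-of-the-envelope arithmetic, here is how the cost per good die moves with yield. The $10K wafer price and ~90 candidate dies are the thread's placeholder figures, so the exact dollar amounts are only indicative:

# Cost per good die vs. yield, reusing the placeholder figures from the thread
# ($10K per wafer, ~90 candidate dies); both are guesses, so treat the output
# as indicative only.
WAFER_PRICE_USD = 10_000
CANDIDATE_DIES = 90

for yield_pct in (40, 50, 60):
    good_dies = CANDIDATE_DIES * yield_pct / 100
    print(f"{yield_pct}% yield: ~{good_dies:.0f} good dies, ~${WAFER_PRICE_USD / good_dies:.0f} per die")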
 