I have never seen so much disinformation on B3D.
What is happening?
> The memory part is right, but "salvage" not
Since G200 in the GTX260 is a "salvage part", and the GTX260 uses potentially less expensive GDDR3 instead of GDDR5 (plus it may be that the GTX260 448MB will be a direct competitor to the 4870 512MB, not the GTX260 896MB).
> There is a consistent 40%+ performance increase in the DX10 benches from the PR slides.
Well, those ATi slides are based on the DX9 versions of the games; in the DX10 versions their cards still have a performance deficit... So either A) AMD hasn't worked on its DX10 drivers much, or B) something is hurting DX10 performance?
It's called rumors and speculation. But I'd be happy if you just uploaded all the documents you possess with the correct information; that'd make everybody happy.
> Well, those ATi slides are based on the DX9 versions of the games; in the DX10 versions their cards still have a performance deficit... So either A) AMD hasn't worked on its DX10 drivers much, or B) something is hurting DX10 performance?
There is a consistent 40%+ performance increase in the DX10 benches from the PR slides.
Really?
Because as I see it, DX10 apps get a bigger lead over the DX9 titles when compared to a 9800GTX that's "supposed to have vastly better DX10 drivers".
> Nv pays 10K per wafer (say). If they get 100 working chips that makes $100 per chip regardless of speed/quality bins.
We don't know what NVIDIA pays for what.
> What will be good is if AMD/ATi finally gets something competitive in time
You're right about "in time", but I have my doubts about "competitive"...
I was wondering...
> Really?
> Because as I see it, DX10 apps get a bigger lead over the DX9 titles when compared to a 9800GTX that's "supposed to have vastly better DX10 drivers".
Is that consistent, and with the latest Catalyst?

Doesn't really happen in the real world; they must have picked certain parts of the games where it does better. The only exception is COJ, and even the HD3870 had an advantage there.
> Wouldn't a GT200 card with GDDR5 push the bandwidth to somewhere over 200 GB/sec?
> I wonder if the 55nm GT200b will support GDDR5, or if GT200 will require a further redesign.
> Can't wait for R700 / 4870X2.
There was a semi-official confirmation by bit-tech that D12U will be the first Nvidia GPU to adopt GDDR5, in late 2009.
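For what it's worth, the "over 200 GB/sec" figure checks out on paper. A quick sketch — GT200's 512-bit bus is known, but the 3.6 Gbps GDDR5 data rate below is an assumption based on the GDDR5 parts shipping at the time (the HD 4870's memory), since no clocks for a GDDR5 GT200 have been announced:

```python
# Peak memory bandwidth = bus width in bytes * effective data rate.
# Data rates here are assumptions: ~2.2 Gbps GDDR3 as on the shipping
# GTX 280, ~3.6 Gbps GDDR5 as on the HD 4870.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and data rate."""
    return bus_width_bits / 8 * data_rate_gbps

gt200_gddr3 = bandwidth_gb_s(512, 2.2)  # GTX 280 as shipped: ~141 GB/s
gt200_gddr5 = bandwidth_gb_s(512, 3.6)  # hypothetical GDDR5 GT200: ~230 GB/s
```

So even at launch-era GDDR5 clocks, a 512-bit GT200 would land around 230 GB/s, comfortably over 200.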
> You're right about "in time", but I have my doubts about "competitive"...
The 4850 will be a bit more expensive than the 8800GT, but significantly faster. The 4870 will be a bit more expensive than the 9800GTX, but significantly faster. That means ATI is actually competing with Nvidia on price/performance for the first time since R580. Why wouldn't you be optimistic about that? The unexpectedly low price of the GTX260 definitely suggests that Nvidia is taking ATI seriously this time round (even though there's no competition at the high end).
I don't see what changed since before we knew about 800 SPs.
4850 is still going to compete with G92/b.
4870 is still somewhere in between G92/b and GTX260.
800 SPs with (relatively) fast DP math will make a little revolution on the GPGPU market, but on the general "games" market they won't make much more difference than the earlier presumed 480 SPs.
And R700 turns out to be another (crappy) AFR card with doubled-up GDDR5, so even if it's faster than the GTX280 in 3DMark, it'll cost more to produce and it'll be slower in like 90% of new DX10 games.
And then comes "G200b" and GDDR5 support from NV.
We'll see how it all plays out this time but I really don't see any reason to be overly optimistic about RV770.
> There was a semi-official confirmation by bit-tech that D12U will be the first Nvidia GPU to adopt GDDR5, in late 2009.
> I was wondering...
> Is the 4850 supposed to cost $200 at launch?
> Because that's the cost of the _8800GT_ in this comparison...
Yes, it's priced against the 8800GT (and its replacement). Might be a bit more expensive at the start, though.
> Based on some rumors:
The memory part is right, but "salvage" not:
Wafers are not seeded with "this will be 280, this will be 260".
Nv pays 10K per wafer (say). If they get 100 working chips that makes $100 per chip regardless of speed/quality bins.
Wouldn't a GT200 card with GDDR5 push the bandwidth to somewhere over 200 GB/sec?
I wonder if the 55nm GT200b will support GDDR5, or if GT200 will require a further redesign.
Can't wait for R700 / 4870X2.
> Based on some rumors:
> Die size: 24mm x 24mm -> 576mm²
> Wafer: 300mm
> Dies/wafer: 89 ~ 92
> Yields: ~40%
> Good dies/wafer: ~36
> GTX 260 = ~27 dies/wafer
> GTX 280 = ~9 dies/wafer
Obviously I said "if they get 100" as it's a nice round number, and we don't know for sure how many good/bad chips they get. 40% is the minimum they need in order to return some profit, so maybe GT2xx was six months late because they tried to increase yield. With 50-60% we'll be looking at $160-200 per chip; funny how a few pages earlier some people claimed such prices.
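The per-chip math in this back-and-forth can be sketched out. Note the $10K wafer price and the yields are the thread's own assumptions, and the gross-die count uses a standard edge-loss approximation, not a sourced figure:

```python
import math

# Cost per chip on a pay-per-wafer model, as argued above: the wafer
# price is fixed, so each good die costs (wafer price / good dies)
# regardless of whether it's binned as a GTX 260 or a GTX 280.
# The $10K wafer price and the yield figures are thread assumptions.

def gross_dies_per_wafer(wafer_diameter_mm: float, die_side_mm: float) -> int:
    """Standard approximation: wafer area / die area, minus edge losses."""
    die_area = die_side_mm ** 2
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    # Dies lost where the square grid meets the wafer's circular edge.
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area)
    return int(wafer_area / die_area - edge_loss)

def cost_per_good_die(wafer_price: float, gross_dies: int, yield_rate: float) -> float:
    """Wafer price spread over the dies that actually work."""
    return wafer_price / (gross_dies * yield_rate)

dies = gross_dies_per_wafer(300, 24)         # ~94, close to the rumored 89-92
at_40 = cost_per_good_die(10_000, 90, 0.40)  # ~$278 per good die
at_60 = cost_per_good_die(10_000, 90, 0.60)  # ~$185 per good die
```

At the rumored 40% yield that's roughly $278 per good die, which fits the "minimum to return some profit" argument; at 60% it drops to roughly $185, approaching the $160-200 range quoted above.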