NVIDIA GF100 & Friends speculation

And that's my point. With the original GF100 being expected around the same time as Cypress, driver development surely hasn't lagged for all that time.

And with nV's driver team always seen as outperforming AMD's, who's to say they weren't able to do just as much work over the past few months?
This is, after all, a very expensive and very important product; you don't want to ruin that launch because the drivers are teh suxx. Assuming they've been sleeping and had little time to work on drivers is, imho, the wrong PoV.

Of course driver development has lagged; parts based on the new architecture have yet to come to market. You don't optimize the driver for the characteristics of a low-clocking A1/A2 spin.
 
Could this kind of connection between two chips happen in a reasonable time?
intelisscc-4.jpg


(it's from that article)
 
And that's my point. With the original GF100 being expected around the same time as Cypress, driver development surely hasn't lagged for all that time.
Except that driver development is slow and inefficient until final hardware is available in volume.
 
Except that driver development is slow and inefficient until final hardware is available in volume.
Sure, the driver team sits idle for 6 months after the first test silicon is back from the foundry, and only really starts working once reviewers and consumers have seen 80% of their games crash, stutter, glitch, under-perform or not even start ;)
 
To satisfy my own curiosity: this is based on Nvidia's numbers. The average excludes Vantage for obvious reasons.

480bench.png


Alternate view using 285 as baseline:

480bench2.png
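
For anyone who wants to reproduce the averaging, here's a minimal sketch of the method: per-benchmark ratios against a baseline card, Vantage excluded, then averaged. The scores below are placeholders rather than the numbers from the charts, and the geometric mean is my own choice.

Code:
from math import prod

# Placeholder scores (not the actual chart numbers); higher is better.
scores_480 = {"Game A": 60.0, "Game B": 45.0, "Vantage": 9000.0}
scores_285 = {"Game A": 40.0, "Game B": 30.0, "Vantage": 7000.0}  # baseline card

# Per-benchmark ratio vs. the baseline, with Vantage excluded.
ratios = [scores_480[k] / scores_285[k] for k in scores_480 if k != "Vantage"]

# Geometric mean of the ratios (an arithmetic mean of ratios works too).
average = prod(ratios) ** (1 / len(ratios))
print(f"Average relative performance: {average:.2f}x the baseline")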
 
Overall, I don't think ATI should be disappointed with these results. The 5970 is still top dog, the 5870 still beats the more-expensive-to-produce GTX 470, and the rest of the market has little competition from NVidia.

Of course driver development has lagged; parts based on the new architecture have yet to come to market. You don't optimize the driver for the characteristics of a low-clocking A1/A2 spin.
Clocks don't matter for driver optimization. Memory timings can have a small effect, but that's it.

With the exception of software optimizations like memory allocation, shader compilers, or alteration of the workload, drivers have made fairly minimal changes to performance over the last couple of years.
 
The turnaround time for testing is vastly larger until they have a large number of chips to test.

How many would they need to be able to test properly? 10? 50? 100?

How long have they had at least some reasonable number of A1/A2 samples available?

Would it matter much for development if they only ran at, oh, say, 495 MHz?
 
The turnaround time for testing is vastly larger until they have a large number of chips to test.
Not really.

And in fact, most of the development and performance gains that happen over a chip's lifespan happen during bring-up. Comparatively, for generic gains, it's slow after that.
 
Sure, the driver team sits idle for 6 months after the first test silicon is back from the foundry, and only really starts working once reviewers and consumers have seen 80% of their games crash, stutter, glitch, under-perform or not even start ;)
Test silicon is not equal to final silicon, however. Yes, work can be done much more rapidly than before they have any silicon, but if the test silicon were really ready, they would have moved straight to production.
 
Overall, I don't think ATI should be disappointed with these results. The 5970 is still top dog, the 5870 still beats the more-expensive-to-produce GTX 470, and the rest of the market has little competition from NVidia.
Yes, but I still see a big, yawning gap between the 5770 and the 5850. I think AMD blew a big hole in their lineup. This may just provide nV with the opening they desperately need. Juniper with a 192-bit bus would have been a much better match for GF104. That gap is way too big, and the 5830 is pretty lame.
 
Yes, but I still see a big, yawning gap between the 5770 and the 5850. I think AMD blew a big hole in their lineup. This may just provide nV with the opening they desperately need. Juniper with a 192-bit bus would have been a much better match for GF104. That gap is way too big, and the 5830 is pretty lame.
The 5830 is only lame because of the price, and the price is only high because there's no competition from NVidia. The performance fits between the 5770 and the 5850 quite nicely, though maybe it would have been better with one more SIMD enabled. GF104 is a long way off, and may wind up being as big as Cypress.

If NVidia can sell 448-bit cards with 450 mm2 of silicon at the same price as RV770, then ATI will have no problem selling Cypress boards under $200.

EDIT: Typo, meant $200, not $100
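
To put rough numbers on the "more expensive to produce" point, here's a back-of-the-envelope dies-per-wafer sketch using the standard approximation. The 450 mm2 figure is from the post above; the ~334 mm2 (Cypress) and ~256 mm2 (RV770) die sizes are the commonly cited values and should be read as assumptions, and yield is ignored entirely.

Code:
from math import pi, sqrt

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Standard gross-die approximation: pi*r^2/A - pi*d/sqrt(2*A); ignores yield.
    r = wafer_diameter_mm / 2
    return int(pi * r * r / die_area_mm2
               - pi * wafer_diameter_mm / sqrt(2 * die_area_mm2))

for name, area in [("GF100-class", 450), ("Cypress", 334), ("RV770", 256)]:
    print(f"{name} ({area} mm2): ~{dies_per_wafer(area)} gross dies per 300 mm wafer")

Fewer candidate dies per wafer is what drives the cost comparison above, before yield even enters the picture.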
 
The 5830 is only lame because of the price, and the price is only high because there's no competition from NVidia. The performance fits between the 5770 and the 5850 quite nicely, though maybe it would have been better with one more SIMD enabled. GF104 is a long way off, and may wind up being as big as Cypress.

If NVidia can sell 448-bit cards with 450 mm2 of silicon at the same price as RV770, then ATI will have no problem selling Cypress boards under $100.

The situation might be acceptable from AMD's PoV, but it is definitely a missed opportunity. I wonder if they'll go with a 192-bit bus on Juniper's replacement.
 
"Zero development" was never mentioned, and is an absurd assumption. It's simple common sense that a new card with a brand new architecture stands to gain more performance from the evolution of drivers than one that has been out for 6 months and is itself a revision of an existing architecture.

And this brings up the long-debated issue of what exactly determines whether or not something is "new" architecturally. According to what? PR and marketing slides?
 
The situation might be acceptable from AMD's PoV, but it is definitely a missed opportunity. I wonder if they'll go with a 192-bit bus on Juniper's replacement.

I very, very much doubt that. When has ATI ever gone with a non-power-of-two memory bus? It's pretty much always been 32/64/128/256/512. I just don't see ATI jumping on that bandwagon this late.
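
For what it's worth, a quick sketch of what those bus-width options mean in bandwidth terms, assuming rounded, era-typical GDDR5 data rates rather than official board specs:

Code:
def mem_bw_gb_s(bus_width_bits, data_rate_gbps_per_pin):
    # Peak memory bandwidth in GB/s: (bus width / 8) bytes times the per-pin data rate.
    return bus_width_bits / 8 * data_rate_gbps_per_pin

print(mem_bw_gb_s(128, 4.8))  # ~76.8 GB/s, 5770-class (128-bit)
print(mem_bw_gb_s(192, 4.8))  # ~115.2 GB/s, the hypothetical 192-bit Juniper
print(mem_bw_gb_s(256, 4.0))  # ~128.0 GB/s, 5850/5830-class (256-bit)

On those assumed rates, a 192-bit part would land between the 5770 and the 5850, which is exactly the gap being discussed.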
 
It already happened...

A 3.2 GHz HT 3.1 32-bit link can provide 51.2 GB/s = 409.6 Gb/s of bandwidth...
Oops, looks like the b/B confusion hit me again...
At least with French octets it can't happen.
Still, I have a question: couldn't Intel's solution be easier and cheaper to produce than HyperTransport links? Or is power efficiency the only strong point of the tech?
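
Working that link figure through, a quick sketch of the arithmetic, assuming a double-pumped 32-bit link in each direction (these are the poster's numbers worked out, not a spec citation):

Code:
clock_hz = 3.2e9                         # HT 3.1 link clock from the post
transfers_per_s = clock_hz * 2           # double data rate: 6.4 GT/s
bytes_per_transfer = 32 / 8              # 32-bit link = 4 bytes per transfer
one_way_gb_s = transfers_per_s * bytes_per_transfer / 1e9   # 25.6 GB/s per direction
aggregate_gb_s = one_way_gb_s * 2        # both directions: 51.2 GB/s
print(f"{aggregate_gb_s:.1f} GB/s = {aggregate_gb_s * 8:.1f} Gb/s")  # 51.2 GB/s = 409.6 Gb/s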
 