Guest
.13 micron for the R300 is no rumour; it's in the making and will be a refresh of the current .15 micron part. Also, the 9700 is DDR II ready, so they can plug that onto the card.
DemoCoder said: Either
#1 NVidia is going to be thoroughly beaten this round
#2 NVidia has a 256-bit external bus
#3 NVidia has some other exotic solution (deferred tiling, embedded DRAM, etc.)
#2 seems to be the most conservative, predictable approach if you were designing a new chip and wanted the highest probability of success.
LeStoffer said: Why didn't they make the full jump like ATI and use that silicon estate to make the shaders even more powerful, to handle both DX8 and DX9 very well?
If the NV30 beats the R300 hands down on 'older' software, then you'll know why Nvidia made that choice.
ciao,
Marco
duncan36 said:They managed to convince everyone to buy a GF SDR and dump their TnT2 Ultra to get this mythical T&L which was going to revolutionize gaming down the road.
Of course this was total marketing.
If the NV30 isn't much faster than the R300, or is even slower, I expect to see a blitz of information about features the NV30 has that will revolutionize gaming.
LeStoffer said: I guess we can all agree that there is just no way in hell they would design a chip with 8 pixel pipelines, a 128-bit external bus, and a mildly refined LMA II architecture.
GF SDR was still noticeably faster than the TNT2 Ultra, so T&L wasn't the only selling point
duncan36 said:GF SDR was still noticeably faster than the TNT2 Ultra, so T&L wasn't the only selling point
Actually, you're incorrect. Any advantages would in no way justify spending the $250+ they were charging for the GeForce SDR when you'd just dropped $200 on a TNT2 Ultra.
On an Athlon 650:
---------------------
TnT2Ultra Q3 Demo001-1024x768@32bit- 43fps
GeforceSDR Q3 Demo001-1024x768@32bit- 41.6fps
Before you say it's CPU limited:
TnT2Ultra Q3 Demo001-1280x1024@32bit- 24.6fps
GeforceSDR Q3 Demo001- 1280x1024@32bit- 23.7fps
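For what it's worth, those quoted numbers can be turned into a relative gap (a quick sketch using only the fps figures posted above; the script itself is mine, not from the original post):

```python
# Q3 Demo001 fps figures quoted in the post above.
# A negative delta means the GeForce SDR was slower than the TNT2 Ultra.
benchmarks = {
    "1024x768@32bit":  {"TNT2 Ultra": 43.0, "GeForce SDR": 41.6},
    "1280x1024@32bit": {"TNT2 Ultra": 24.6, "GeForce SDR": 23.7},
}

for res, fps in benchmarks.items():
    delta = (fps["GeForce SDR"] - fps["TNT2 Ultra"]) / fps["TNT2 Ultra"] * 100
    print(f"{res}: GeForce SDR vs TNT2 Ultra: {delta:+.1f}%")
```

At both resolutions the gap works out to roughly 3-4% in the TNT2 Ultra's favour.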
As you can see the GeforceSDR was totally not worth the money, but because of the T&L hype people bought the card.
Either you weren't surfing the hardware boards at the time or you have a selective memory, because the PR push for T&L (no, let's call it a propaganda push) by Nvidia was massive.
This was when Nvidia's stock was skyrocketing, so the boards were infested with morons pushing the party line, claiming that T&L on the GeForce SDR was the second coming.
If Nvidia tries to push features over speed this time because the NV30 underperforms, I think they'll have a harder time of it; once bitten, twice shy, as the saying goes.
Obviously the benchmarks for the supposed new technology in the GeForce SDR were awful, and as I've shown, the TNT2 Ultra often outpaced it.
OK, there are a few games where the TNT2 Ultra is about as fast as the GF256 SDR, maybe even 2-3% faster. But there are also a lot of games where the GF256 SDR is 50+% faster.
So how can you draw the conclusion that the cards are equal from these benchmarks?
Isn't that obvious?
He just chose the benchmarks where the cards were performing pretty much the same to prove his point, which is plain wrong.