NVIDIA: Beyond G80...

If you think websites are up to date... an IHV doesn't buy RAM by looking at what's on the company website. Websites can be MONTHS out of date, and they usually are.
 

Samsung started GDDR4 production nearly a year ago, and so far there's still no sign of it in any Nvidia product.
 
And? When was NVDA ever forward-thinking with new tech... The 5800 stung them hard, IBM took them for a ride; they just fix and improve their GPUs and use the best "speed to cost" RAM that's around. It's been their MO ever since the NV1. The odd new bus is the most radical thing they've done in 4 years...
 

X1950 XTX GDDR4?
It wasn't exactly a top-of-the-line card for long, and it didn't add anything other than speedier RAM.

Also, I sense a little unreasonable hostility towards Nvidia coming from you.
So, you think the only thing they've innovated on these past few years is the odd memory bus configuration of the GeForce 8800s, right?
What about SLI, SM 3.0, PureVideo, the various nForces, SoundStorm, DX10/unified shading, dual-channel memory, HyperTransport, self-contained SLI, etc., etc.?
Don't they count too? (And yes, even though I'm still angry with them because of multiple driver issues in Vista, I'm not ready to dismiss everything they do/did just because of that.)
 
Not to be too off topic... Just because NVDA follows the API, uses tech they bought (3dfx), ships a feature that DOESN'T work at all on their high end at release, and licenses other companies' tech, that doesn't make them innovative. And my comment was about their GPU/card business; not that they're anything more than average in the chipset business either. But NVDA hasn't had to be all that cutting-edge; they just do what they do and do it well. As for GDDR4, they have no need for it at all right now. And once GDDR4 has been out for a year on AMD, they'll start using it at a much lower price.
PS: can you figure out what my comments were?
PPS: if I seem hostile towards NVDA, it's not the silicon and PCBs that get to me; that stuff is great. It's the other part of them that should get you all hostile... Digg understands.
 

Big corporations are all alike; Nvidia, Intel and AMD/ATI are no different from each other.
If Nvidia has been riding on acquired tech and great marketing all these years, kudos to them, and shame on the "superior" competitors who didn't take advantage of such limitations.
Quite the contrary: prices could be much lower by now, instead of climbing above 650 dollars a pop for the newest high-end SKU.
 
That's a 30% increase in memory clock/bandwidth over the GTX. Does increasing the memory clock benefit the GTX any more than increasing the core clock?

I'm guessing the shader clock is 1500 MHz, if you follow the trend of core clock x 2 + 150 MHz (note that this only applies to G80); rough numbers are sketched below.

768MB? Or 1.5GB?
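A quick back-of-the-envelope sketch of those figures (Python). The 384-bit bus and 900 MHz memory clock are the stock 8800 GTX numbers; the 30%-higher memory clock and the 675 MHz core used for the shader-clock guess are only the speculation from this thread, not confirmed specs.

# Known 8800 GTX baseline vs. the guesses above (rumoured numbers are hypothetical).
def bandwidth_gb_s(bus_bits, mem_clock_mhz, data_rate=2):
    # GDDR3/GDDR4 transfer 2 bits per pin per clock, hence data_rate=2.
    return bus_bits / 8 * mem_clock_mhz * data_rate / 1000

print(bandwidth_gb_s(384, 900))        # stock GTX: ~86.4 GB/s
print(bandwidth_gb_s(384, 900 * 1.3))  # "+30% memory" rumour: ~112 GB/s on the same 384-bit bus
print(675 * 2 + 150)                   # shader-clock guess: core x 2 + 150 = 1500 MHz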
 

768MB, probably.
1.5GB is not only completely unnecessary right now and in the near future, but also extremely expensive.
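To spell out why those are the two natural options: G80's 384-bit bus means twelve 32-bit memory chips, so capacity comes in steps of twelve. A quick sketch, assuming the standard 512 Mbit and 1 Gbit GDDR densities of the time:

# Twelve chips on G80's 384-bit bus; total capacity depends on the density per chip.
chips = 384 // 32                       # 12 memory chips
for megabits_per_chip in (512, 1024):   # 512 Mbit parts vs. pricier 1 Gbit parts
    print(chips, "x", megabits_per_chip, "Mbit =", chips * megabits_per_chip // 8, "MB")
# -> 12 x 512 Mbit = 768 MB, 12 x 1024 Mbit = 1536 MB (1.5 GB)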
 

At anything above 8x AA and 16x AF, at least in FEAR, the G80 gets close to a 1:1 increase from a memory overclock; at anything less, the core clock gives the 1:1 increase. With games that push fillrates and shaders harder, i.e. next-gen games, core clocks will matter more. This is what I've seen with a couple of next-gen titles and engines.
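A toy way to read that observation (not a measurement, just the reasoning): whichever unit is the bottleneck at a given setting is the one whose overclock shows up roughly 1:1 in frame rate. The frame rates and overclock factors below are made up for illustration.

# Toy model of the claim above: frame rate tracks the overclock of the limiting resource.
def estimated_fps(base_fps, core_factor, mem_factor, memory_bound):
    return base_fps * (mem_factor if memory_bound else core_factor)

print(estimated_fps(60, 1.10, 1.15, memory_bound=True))   # high AA/AF case: ~69 fps, scales with memory
print(estimated_fps(60, 1.10, 1.15, memory_bound=False))  # lower settings: ~66 fps, scales with core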
 