NVIDIA G92 : Pre-review bits and pieces

Where are the $100 cards on this list? After all, it's the 8600 GTS that the 3850 competes against.
The 2600/8600 series have hit the $100 level, except for the GTS. I don't see a 256MB 8800 GT hitting that level any time soon.

Nvidia has to get the 256MB 8800 GT out in huge quantities so that prices drop to the levels promised initially (under $200). Until then, the 3850's main competitor will be the 8600 GTS.
 
However, it's worth pointing out that a 192-bit PCB ought to be cheaper than a 256-bit one, and that 6 full-density memory chips are *afaik* slightly cheaper than 8 half-density chips (i.e. 384MiB vs 256MiB). So even if it wasn't justified in terms of yields, you'd still have savings elsewhere. Ideally, you wouldn't want too much demand for such a SKU though, I guess.
nvidia doesn't save money on the PCB, so those savings don't improve G92 margins (assuming G92 is the chip used).

-FUDie
 
nvidia doesn't save money on the PCB, so those savings don't improve G92 margins (assuming G92 is the chip used)
Uhm, of course it does. It tends to be a good idea not to claim such things before thinking it through...

A given price segment has a target bill of materials which includes the chip, the memory, the PCB, and so on. If you can make your reference design's PCB and memory cost less for a given price segment (i.e. without having to reduce the price of your final product) then you can ask more for your chip to compensate. Any money that doesn't go to Samsung/Hynix/Foxconn/Flextronics/etc. is money that goes to NVIDIA or ATI.
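To make that BOM argument concrete, here's a minimal sketch; every dollar figure below is invented purely for illustration, not a real price:

```python
# Hypothetical numbers to illustrate the bill-of-materials argument.
TARGET_BOM = 120.0  # total BOM budget for the price segment ($, invented)

def chip_asp(memory_cost, pcb_cost, other_cost=20.0):
    """Whatever the board doesn't spend on memory/PCB/etc. is
    headroom the IHV can charge for the chip itself."""
    return TARGET_BOM - memory_cost - pcb_cost - other_cost

asp_256bit = chip_asp(memory_cost=40.0, pcb_cost=25.0)  # 8 chips, 256-bit PCB
asp_192bit = chip_asp(memory_cost=36.0, pcb_cost=20.0)  # 6 chips, 192-bit PCB

# A cheaper board design leaves more of the fixed budget for the chip.
assert asp_192bit > asp_256bit
```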
 
I'm not responsible for how others interpret what I say, damnit! ;) However, I've pretty much got the existence of a 192-bit SKU confirmed, fwiw, I'm just not 300% sure it's G92-based yet.

You know, I thought I posted something about this in relation to your previous post:

Actually my guess is it's NOT a blended average, and that's why the claimed margins are so high.

If only 60% of the chips on a wafer could work as 8800 GTs, and 80% could work either as 8800 GTs or as something else with more redundancy... then one simple way to calculate margins is to consider 80% yields for *all* chip sales. This results in higher margins for the 8800 GT, and lower margins for lower-end SKUs.
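The yield-accounting idea in the quote above can be sketched as follows; the wafer cost and chip count are made-up placeholders:

```python
# Sketch of the two margin-accounting methods. All figures invented.
WAFER_COST = 5000.0
CHIPS_PER_WAFER = 100

full_yield = 0.60     # fraction of dice good enough for the full 8800 GT
salvage_yield = 0.80  # fraction usable as a GT *or* a more redundant SKU

# Strict per-SKU accounting: charge the 8800 GT only for its own yield.
cost_gt_strict = WAFER_COST / (CHIPS_PER_WAFER * full_yield)

# Alternative: charge every chip sale at the 80% salvage yield.
cost_gt_blended = WAFER_COST / (CHIPS_PER_WAFER * salvage_yield)

# The second method makes each 8800 GT look cheaper to build (hence the
# high claimed margins); the cost of the dice that only work as salvage
# parts is pushed onto the lower-end SKU instead.
assert cost_gt_blended < cost_gt_strict
```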

I wasn't initially sure whether you meant more redundant blocks (i.e. a 6C/192-bit G92 ASIC) or more built-in redundancy (i.e. 8C/512-bit).

However, it's worth pointing out that a 192-bit PCB ought to be cheaper than a 256-bit one, and that 6 full-density memory chips are *afaik* slightly cheaper than 8 half-density chips (i.e. 384MiB vs 256MiB).

I also wrote that something like a 6×512Mbit SKU would be cost-competitive against an 8×256Mbit SKU, especially given notional PCB-level savings too.
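For illustration, a rough sketch of that comparison; the per-chip prices are hypothetical, chosen only to reflect the earlier assumption that six full-density chips come in slightly cheaper than eight half-density ones:

```python
# Comparing the two memory configurations. Prices are invented.
price_512mbit = 4.6  # $ per full-density (512 Mbit) chip, hypothetical
price_256mbit = 3.6  # $ per half-density (256 Mbit) chip, hypothetical

cost_6x512 = 6 * price_512mbit  # 384 MiB on a 192-bit bus
cost_8x256 = 8 * price_256mbit  # 256 MiB on a 256-bit bus

# More memory for slightly less money, before any PCB-level savings.
print(round(cost_6x512, 2), round(cost_8x256, 2))  # 27.6 28.8
```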
 
Uhm, of course it does. It tends to be a good idea not to claim such things before thinking it through...

A given price segment has a target bill of materials which includes the chip, the memory, the PCB, and so on. If you can make your reference design's PCB and memory cost less for a given price segment (i.e. without having to reduce the price of your final product) then you can ask more for your chip to compensate.
For the PCB, there should be transparent price pass-through from e.g. Flextronics to e.g. XFX. Bundling skews the price advantage toward the IHVs, so NV/AMD can capture a price wedge there too. The AIBs/OEMs, etc., don't like that much...

Edit:
Any money that doesn't go to Samsung/Hynix/Foxconn/Flextronics/etc. is money that goes to NVIDIA or ATI.
Gee, don't tell that to the AIB, OEM, re-marketer, reseller, retailer chain... ;)
 
The AIBs/OEMs, etc, don't like that much...
Tough spot, that. Most of their customers probably aren't likely to get better pricing on their own, but those who might are the ones likely to order the most. At the same time you want to enable your board partners to make money, yet you don't want your component specs to be lax enough that a single sub-par board from one AIB (looking to save a few bucks) can blemish your brand name and hurt the business of your other partners.

Off topic:
I have a feeling AMD might adopt a pragmatic stance in this regard this time around, giving some concessions/flexibility to their board partners with the 3850 and/or the 3870.
 
Most of their customers probably aren't likely to get better pricing on their own,...
Yep. That's the IHV's stated goal. Volume component purchasing and bundling enable price savings and guard against SKU sub-component supply shortages/volatility. The question is, does an AIB with its own production line necessarily gain by outsourcing PCB production/assembly? Maybe yes, maybe no. I do know they don't like not having the option to endogenize costs...
 
I don't think 512MB of 700MHz GDDR3 is too much to expect. Even if Nvidia or ATI thought otherwise, they could've cut back on the core; I'd much rather have a 96sp or 240sp part with 512MB of slower GDDR3 than a blazing-fast core that goes to waste because it's choked by only having 256MB to work with.

As a note: partners will be building Radeon HD 3850s with 512MB of memory. They will obviously slot in between 3850 256MB and 3870 prices.
 
http://forums.vr-zone.com/showthread.php?t=204269

GeForce 8800 GTS 512MB
* G92 core
* 128 SPs
* 650 MHz core clock, 1625 MHz SP clock
* 256-bit memory interface
* 970 MHz GDDR3 memory clock (1940 MHz effective)
* New black dual-slot cooler (not the original GTS cooler)
* Price: $299-349

Should be a pretty fast card if those specs are correct.

It will have 32 TMUs and most likely 24 ROPs (if one partition isn't disabled for some reason). In that case this card would have 6.2% higher texel & pixel fillrate compared to the 8800 Ultra, and ALU performance would be 8.3% higher than the 8800 Ultra's.
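Those percentages can be checked against the 8800 Ultra's launch clocks (612 MHz core, 1512 MHz shader, 128 SPs); the GTS 512 clocks are the rumored ones from the VR-Zone list above:

```python
# Checking the quoted percentages against the Ultra's launch specs.
ultra_core, ultra_shader = 612, 1512
gts512_core, gts512_shader = 650, 1625  # rumored clocks

# Same unit counts on both cards, so the gains reduce to clock ratios.
texel_gain = gts512_core / ultra_core - 1
alu_gain = gts512_shader / ultra_shader - 1

print(f"{texel_gain:.1%}")  # 6.2%, matching the post
print(f"{alu_gain:.1%}")    # 7.5% with the Ultra's 1512 MHz shader clock;
                            # the post's 8.3% figure assumes 1500 MHz
```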
 
http://forums.vr-zone.com/showthread.php?t=204269

Should be a pretty fast card if those specs are correct.

No question about it, but still...
With Crysis, Bioshock and Call of Duty 4 out, having no true new high-end cards (with corresponding memory bandwidth) during the holiday season seems like a missed opportunity for both Nvidia and AMD.

In the end, I doubt this new "full G92" is much faster in the real world than either the GTX or the Ultra at true high-end resolutions with filters enabled (where it "hurts"), and those products deserved a proper replacement at their price points after a full year (no, those SLI and Crossfire on-a-stick solutions don't count in my book ;)).
 
I was struck by this week's CompUSA flyer advertising an 8800GT for $270 and an 8800GTS 640 for $450. Sorry for stating the obvious, but progress is good. :)
 
IIRC G8x/G9x's ROP partitions are each assigned to a 64-bit MC channel, so 256-bit memory interface = 16 ROPs.

I think the 8 extra TMUs and 16 scalar processors (when comparing the 8800 GTS 512 to the 8800 GT 512) should be much more important to overall performance than the number of ROPs and the memory bandwidth.
Of course, the small increases in clock speeds should give it a nice boost as well.
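The quoted ROP arithmetic can be written out as a one-liner (4 ROPs per partition, one partition per 64-bit memory-controller channel):

```python
# G8x/G9x: ROP partitions are tied one-to-one to 64-bit MC channels.
ROPS_PER_PARTITION = 4
CHANNEL_WIDTH_BITS = 64

def rop_count(bus_width_bits):
    return bus_width_bits // CHANNEL_WIDTH_BITS * ROPS_PER_PARTITION

print(rop_count(256))  # 16 ROPs for a 256-bit part like the 8800 GTS 512
print(rop_count(384))  # 24 ROPs for the 384-bit 8800 GTX/Ultra
```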
 
I think the 8 extra TMUs and 16 scalar processors (when comparing the 8800 GTS 512 to the 8800 GT 512) should be much more important to overall performance than the number of ROPs and the memory bandwidth.
Of course, the small increases in clock speeds should give it a nice boost as well.

Well, from reviews I've read so far the biggest performance gains in most games come from o/c'ing the SPs & RAM, so you're half-right.
 
Humm...:

nvidia_vcard2_550.jpg


nvidia_news_rwbmvcard1_550.jpg


Taken from eVGA's website, apparently.
Is the cat out of the bag, or is it just a typo? ;)

Source.
 
OK, looks like someone can actually add their own GPU name in the configuration details fields on that specific page, so I'd call it bogus... for now. ;)

edit
AnarchX beat me to it. :)
 
2549-nvidia_logo3.jpg


Geforce 9800 will come with product specs :

- 65nm process technology at TSMC
- Over one billion transistors
- Second Generation Unified Shader Architecture
- Double precision support (FP64)
- GPGPU native
- Over one TeraFLOPS of shader processing power
- MADD+ADD configuration for the shader units (2+1 FLOPS = 3 FLOPS per ALU)
- Fully Scalar design
- 512-bit memory interface
- 1024MB GDDR4 graphics memory
- DirectX 10.1 support
- OpenGL 3.0 Support
- eDRAM die for “FREE 4xAA”
- built in Audio Chip
- Built-in tessellation unit (in the graphics core)
- Improved AA and AF quality levels
Prices: $549-649 for the 9800 GTX, $399-449 for the 9800 GTS

Someone posted it on a forum I frequent. Can't say if it's real, though.

US
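As a rough sanity check on the "over one TeraFLOPS" line, using the list's own 3 FLOPS/ALU figure; the ALU counts and shader clocks below are pure guesses, since the list gives neither:

```python
# Peak shader throughput under the list's MADD+ADD assumption.
FLOPS_PER_ALU = 3  # MADD (2) + ADD (1), per the rumored spec list

def shader_gflops(num_alus, shader_clock_mhz):
    # peak GFLOPS = ALUs x FLOPS-per-ALU x clock (MHz) / 1000
    return num_alus * FLOPS_PER_ALU * shader_clock_mhz / 1000.0

# A hypothetical 256 ALUs at 1500 MHz would clear the 1 TFLOPS claim...
print(shader_gflops(256, 1500))  # 1152.0 GFLOPS
# ...while a G92-like 128 ALUs at 1625 MHz would not.
print(shader_gflops(128, 1625))  # 624.0 GFLOPS
```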
 