Extrapolation of GFFX series naming scheme and performance.

Grall

Using known information from past history, we can deduce that the successor to the GF FX will actually be two different chips: the GF VX for consumers, and the GF HX geared more towards the pro market. The HX line can only use DDR-EDO2 (but has parity/ECC support) and handles more RAM, while the VX can only cache 64MB of memory (because Nvidia feels customers won't have need for more anyway). While the VX has slightly more raw bandwidth, the HX's memory latency is superior and its overall performance is higher. The successor to those chips will be the GF TX, another consumer-oriented chipset with an improved DDR2-SDRAM memory controller, but still that annoying 64MB caching limitation.

A total core redesign follows: the GF GX, NX and LX, respectively. The GX is initially slammed for horrendous 16-bit performance (because Nvidia expected Microsoft to have eliminated such old garbage from their OSes, but that's of course too much to ask for). The NX is aimed at professionals again, and only finds widespread use in graphics servers. Perhaps finding two-letter combinations ending in X confusing, Nvidia abandons this scheme for a simpler numbering system instead.

The n820 is the first Nvidia chip to use Rambus memory and is massively hyped to wipe the floor with everything because of its tremendous bandwidth. Unfortunately, the launch of the product is marred by constant delays and implementation problems leading to instability (one so severe that it involves a pre-launch product recall, in which tens of thousands of boards have to be destroyed, and a delay of several more months). Nvidia keeps the hype going by pointing out that Rambus's low pin count means fewer layers in the mobo, and calls competitors' board designs expensive, while hiding the fact that they themselves have to use just as many layers due to Rambus memory's high signalling rate.

More trouble for nV: most of the Rambus memory's huge bandwidth is wasted because the chip simply cannot utilize all of it, and more yet is lost to high latency on page-switch requests. Rambus memory also turns out to be several hundred percent more expensive than conventional memory, prompting Nvidia to come up with a stop-gap solution in the shape of a memory translation chip. This chip turns out to be yet another engineering failure: not only is it even slower, it can lead to data corruption and system crashes too, so all boards using the chip are summarily recalled, destroyed at great expense, and replaced with the older Rambus version. The n820 is quietly allowed to slip into oblivion, preferably never to be mentioned again...


Of course, my crystal ball spied a lot more, but it's getting late over here, so this will have to do for now. :)

*G*
 
This is all based on the fact that Nvidia screwed up on the FX.

So using this base of thought, I'm guessing that ATi will produce a card running on MRAM for memory and a DNA-based processor that can handle large numbers of threads at a time?
 
Chris123234 said:
This is all based on the fact that Nvidia screwed up on the FX.

So using this base of thought, I'm guessing that ATi will produce a card running on MRAM for memory and a DNA-based processor that can handle large numbers of threads at a time?

Captain!
The deuterium crystals have gone into warp-reflux and shattered the humor-core!
I dinna know if I can fix it!
 
Well, I think he is drawing a not-so-subtle Intel/nVidia parallel, which has been done before (though not with as much effort :p ).
 