Nvidia GT300 core: Speculation

I read the quote as saying there's one working chip and two (nearly) dead ones out of three. It doesn't clarify whether that hypothetical working third can have all units enabled at all; he only mentions that the faulty ones aren't performance-GPU material. If a chip only works with, say, half of its units or even fewer enabled, it hardly sounds worth using such a monstrous die at all.
 
@Psycho:
Right, seems like I missed that part. :(

The solution is simple: cut the die in half with an axe and voila, their yields are automatically better... (yes it's a joke heh...)

Yeah, and then scale to the enthusiast segment with a dual-GPU configuration, introducing the need for further driver profiling, the inherent problems with inter-frame dependencies, etc. While it's a nice solution for shareholders, it's not a nice solution for gamers.
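(Just to put rough numbers on the half-a-die joke: a minimal sketch of the standard Poisson defect-yield model, where the defect density and die area are entirely made up for illustration rather than taken from anything reported about GT300.)

```python
import math

def poisson_yield(defect_density_per_cm2, die_area_mm2):
    """Simple Poisson yield model: Y = exp(-D * A)."""
    die_area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Hypothetical numbers purely for illustration -- not GT300 figures.
defect_density = 0.4          # defects per cm^2 (assumed)
big_die = 500.0               # mm^2, a "monstrous" die (assumed)
half_die = big_die / 2.0

print(f"Yield at {big_die:.0f} mm^2:  {poisson_yield(defect_density, big_die):.1%}")
print(f"Yield at {half_die:.0f} mm^2: {poisson_yield(defect_density, half_die):.1%}")
# ~13.5% vs ~36.8%: halving the die area more than doubles the yield in this
# model, on top of getting twice as many candidate dies per wafer.
```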
 
Damn, there's a lot of hype and expectation surrounding this latest flagship video card from Nvidia (one crazy rumor/speculation I've heard is that they want the highest-end model to reach as high as 4 GB of video memory on one single card).

Holy shit (if that were really true).

Anyways, I just hope Nvidia would deliver on this one. :)
 
Damn, there's a lot of hype and expectation surrounding this latest flagship video card from Nvidia (one crazy rumor/speculation I've heard is that they want the highest-end model to reach as high as 4 GB of video memory on one single card).
I have a hard time believing that would be the case for launch parts. It wouldn't surprise me at all if that was intended for later parts based on this chip, in a year or two, but at launch? I don't think so.

It's more plausible for workstation parts, but still probably won't happen right away.
 
Damn, there's a lot of hype and expectation surrounding this latest flagship video card from Nvidia (one crazy rumor/speculation I've heard is that they want the highest-end model to reach as high as 4 GB of video memory on one single card).

The chips will definitely support 4GB of memory - the existing ones already do. So that's not exciting one way or another. It would make absolutely zero difference in gaming performance and it would be stupid for that reason. Of course Quadro/Tesla cards will get the full complement of memory as they do today.
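(For what it's worth, the maximum capacity really is just bus width times DRAM chip density. A quick back-of-the-envelope sketch; the chip densities and the two-chips-per-channel layout are my own illustrative assumptions, not confirmed board specs.)

```python
def board_capacity_gb(bus_width_bits, chips_per_channel, chip_density_gbit,
                      channel_width_bits=32):
    """Total memory = 32-bit channels * chips per channel * chip density (Gbit)."""
    channels = bus_width_bits // channel_width_bits
    total_gbit = channels * chips_per_channel * chip_density_gbit
    return total_gbit / 8.0  # gigabits -> gigabytes

# Illustrative 512-bit-bus configurations (layouts assumed, not confirmed specs):
print(board_capacity_gb(512, 1, 1))  # 16 x 1Gbit chips            -> 2.0 GB
print(board_capacity_gb(512, 2, 1))  # 32 x 1Gbit chips (2/channel) -> 4.0 GB
```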
 
The chips will definitely support 4GB of memory - the existing ones already do. So that's not exciting one way or another. It would make absolutely zero difference in gaming performance and it would be stupid for that reason. Of course Quadro/Tesla cards will get the full complement of memory as they do today.

I'll have to find the link and post it here later, but the 2GB GTX285 gets about 15 fps more than the 1GB version in the one game the two were compared in at 1680x1050. So I'm not exactly sure a 4GB card wouldn't show the same kind of result at higher resolutions, where graphics memory is a must in large quantities.
 
I'll have to find the link and post it here later, but the 2GB GTX285 gets about 15 fps more than the 1GB version in the one game the two were compared in at 1680x1050. So I'm not exactly sure a 4GB card wouldn't show the same kind of result at higher resolutions, where graphics memory is a must in large quantities.

Yeah, I saw those benchmarks too, but we're talking about 4GB, not 2GB. A 100% increase is a tad smaller than a 300% increase.
 
I'll have to find the link and post it here later, but the 2GB GTX285 gets about 15 fps more than the 1GB version in the one game the two were compared in at 1680x1050. So I'm not exactly sure a 4GB card wouldn't show the same kind of result at higher resolutions, where graphics memory is a must in large quantities.

Is the 2GB GTX285 gaining X% more performance over the 1GB variant across the board, or rather in corner cases? It takes quite some effort in today's games to fill 1GB of onboard RAM, and it's obviously twice as hard to reach the 2GB mark unless it's some really dumb scenario where you can, for instance, turn texture compression off.

I wouldn't be surprised if they release a 4GB variant later on, but the added cost of twice the RAM over a 2GB variant won't bring a matching increase in overall performance. Needless to say, if mainstream GPUs end up with up to 4GB of RAM, I wouldn't be surprised to see twice as much on future high-end Quadros; there the added investment has noticeable returns nonetheless.
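(A rough sense of where the memory actually goes, since the render targets themselves are a surprisingly small slice of it; the numbers and the simplified model below are my own assumptions, ignoring compression, driver overhead and render-to-texture passes.)

```python
def framebuffer_mb(width, height, msaa_samples, bytes_per_pixel=4):
    """Very rough color+depth estimate for an MSAA render target plus
    resolved front/back buffers. Ignores compression and driver overhead."""
    color = width * height * msaa_samples * bytes_per_pixel
    depth = width * height * msaa_samples * 4
    resolved = width * height * bytes_per_pixel * 2
    return (color + depth + resolved) / (1024 ** 2)

for (w, h), aa in [((1680, 1050), 4), ((2560, 1600), 8)]:
    print(f"{w}x{h} @ {aa}xMSAA: ~{framebuffer_mb(w, h, aa):.0f} MB of framebuffers")
# Roughly 70 MB and 300 MB respectively -- textures, geometry and shadow maps
# have to do the rest of the work of filling 1GB, let alone 2GB or 4GB.
```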
 
^^^
Hehe (but as they say, build it and they will come hehe)

Still crazy when you think about it: one unit of this Nvidia flagship GeForce video card at 4 GB would already be a mouthful. Crazy already......

But if insanity were to be breached and you put four of these things together (via Nvidia's Quad SLI method), then HOLY FUCKING SHIT hahahahaha.

All this bleeding edge stuff just to make sure that your Crysis experience is way better than on your previous Nvidia video card..........

Max graphics settings, max resolution....yet still does the job. That's really crazy stuff...

:D
 
If FarCry is any example for judging Crysis performance on future platforms, let's just say that I recall playing FC on a 9700PRO with mediocre settings at 1024*768 with some AA/AF. For really high resolutions with more AA/AF samples and every setting absolutely maxed out, it needed something in the 8800GTX league (and of course one has to account for the CPU/platform differences between those two timeframes too). Anyway, with that in mind it shouldn't be too hard to calculate how long it took to come from one end to the other, or how many GPU generations exactly.
 
If FarCry is any example for judging Crysis performance on future platforms, let's just say that I recall playing FC on a 9700PRO with mediocre settings at 1024*768 with some AA/AF. For really high resolutions with more AA/AF samples and every setting absolutely maxed out, it needed something in the 8800GTX league (and of course one has to account for the CPU/platform differences between those two timeframes too). Anyway, with that in mind it shouldn't be too hard to calculate how long it took to come from one end to the other, or how many GPU generations exactly.

Huh? I played FarCry maxed out on my 9800pro. It was only at 1280x1024 with no AA but 16xAF, if I recall correctly. It was very playable most of the time, but there were a few choke points in the game that went into the teens.

My 8800GTS 640MB rips through it at the highest settings, 1920x1200, 16xQ CSAA and 16xAF. The only setting it can't handle too well is if I enable TSAA. But that kills most games with a reasonable amount of foliage.
 
Huh? I played FarCry maxed out on my 9800pro. It was only at 1280x1024 with no AA but 16xAF, if I recall correctly. It was very playable most of the time, but there were a few choke points in the game that went into the teens.

My 8800GTS 640MB rips through it at the highest settings, 1920x1200, 16xQ CSAA and 16xAF. The only setting it can't handle too well is if I enable TSAA. But that kills most games with a reasonable amount of foliage.

Playability can be a subjective matter. Still, where's the surprise in my former example when your own example is several notches above your original experience anyway (no AA to 16xQ by itself is a giant step, let alone the resolution difference)?

Anyway, why aren't you using TMAA instead? The performance drop is to be expected with TSAA, since with 16xQ + TSAA you're actually throwing 8x RGSS at a fairly high amount of alpha-tested surfaces.
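(Rough cost intuition, with a made-up foliage share: supersampling the alpha-tested surfaces means the shader runs once per sample on those pixels, so the hit scales with how much of the frame is foliage. A minimal sketch under that assumption:)

```python
def relative_shading_cost(alpha_tested_fraction, ss_samples_on_alpha):
    """Relative per-frame shading cost when only alpha-tested surfaces are
    supersampled. Opaque pixels shade once; alpha-tested pixels shade once
    per supersample. Purely illustrative -- ignores bandwidth/ROP effects."""
    opaque = 1.0 - alpha_tested_fraction
    foliage = alpha_tested_fraction * ss_samples_on_alpha
    return opaque + foliage

# Assume 30% of the screen is alpha-tested foliage (made-up figure).
for samples in (2, 4, 8):
    print(f"{samples}x SS on alpha tests: ~{relative_shading_cost(0.30, samples):.1f}x shading cost")
# With 16xQ + TSAA forcing 8x on alpha tests, that's ~3.1x in this toy model,
# which is why foliage-heavy games fall apart while TMAA stays cheap.
```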
 
It would be nice to have flexibility with TA supersampling on single GPUs when using 8xQ or 16xQ, instead of being forced to use 8x TA supersampling on alpha tests. The ability to add 2x, 4x, or 8x TA supersampling when using 8xQ or 16xQ would be most welcome, or 2x/4x TA supersampling when using 4x, 8x, or 16x CSAA.
 