NVIDIA GT200 Rumours & Speculation Thread

My thoughts exactly.


GT200, D9E, G9X, or whatever, is NV55, a major refresh/overhaul of G80, like what NV47 / G70 / GF 7800 GTX was to NV40 / GF 6800. Not a totally new architecture (as G80 is relative to NV4x/G7x), but still a new GPU. A high-end GeForce 9 series part.

Nvidia's next-gen NV60 / GeForce 10 series, I don't expect to see that until late 2009, or around the time Larrabee becomes a product.

So you're saying GT200 = D9E = G9x? If true, then GT200 is the GF9800 and will be launched in March, but as you can see, G9x doesn't offer any architectural improvements over G80. How could it be significantly faster than the GF8800 Ultra? They need a very powerful new GPU, because RV770 is said to be about 50% faster than RV670.
 
Higher clocks, more stream processors and more bandwidth alone will give it a nice advantage, even before any of the modifications we saw in the new GTS, which performs better than the GTX with fewer stream processors.
 
That's a ridiculous extrapolation and claim. R600 wasn't that fast, so it wasn't bandwidth-starved and 512-bit was massive overkill. Now, if what you want to see is 2x the performance of G80, good luck getting that with 256-bit or 384-bit GDDR3... Unless what you're thinking of is 256-bit GDDR5, in which case that's another (and much more complex) debate entirely!
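For what it's worth, here's a rough sketch of the bandwidth math behind that argument. The data rates below are only assumptions for the sake of illustration (the first line uses the real 8800 GTX figure), not leaked specs:

```python
# Peak memory bandwidth in GB/s = (bus width in bytes) x (effective data rate in GT/s).
def bandwidth_gbps(bus_bits: int, data_rate_gtps: float) -> float:
    return bus_bits / 8 * data_rate_gtps

# First entry uses the real 8800 GTX figures (~86 GB/s); the other data rates
# are assumptions picked purely for illustration.
configs = {
    "G80 GTX, 384-bit GDDR3 @ 1.8 GT/s": (384, 1.8),
    "256-bit GDDR3 @ 2.4 GT/s (assumed)": (256, 2.4),
    "384-bit GDDR3 @ 2.4 GT/s (assumed)": (384, 2.4),
    "256-bit GDDR5 @ 4.5 GT/s (assumed)": (256, 4.5),
    "512-bit GDDR3 @ 2.2 GT/s (assumed)": (512, 2.2),
}

target = 2 * bandwidth_gbps(384, 1.8)  # "2x G80" as a pure bandwidth target, ~173 GB/s
for name, (bits, rate) in configs.items():
    bw = bandwidth_gbps(bits, rate)
    print(f"{name}: {bw:.1f} GB/s ({bw / target:.0%} of the 2x-G80 target)")
```

Under those assumed clocks, neither 256-bit nor 384-bit GDDR3 gets anywhere near a doubling of G80's bandwidth, which is the point being made.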

Is it really "that" important how do you get to bandwidth amount X? Anyway if I'd face the above dilemma my very first thought would be that it's somewhat outside of NV's policy (from the recent past) to deal with any kind of risks. If they've hypothetically laid out a core with let's say 4 ROP partitions and GDDR5 as a target, then it sounds like that availability will depend too much on ram availability for timeframe X. It's a totally different story if technical hickups delay your projected release dates than ram availability.

Anyway, I have a slight problem with my layman's estimates, because with G71 they removed a very healthy amount of redundancy. A first quick look at G92 doesn't point in that direction yet, because we may not yet have seen the peak ALU frequencies that core can hit.

Two of the blurbs in your initial post don't sit that well with me: one is the lack of 10.1 support and the other is 65nm. How many R&D resources does it really take to implement the 10.1 requirements, after all? And why would they still remain on 65nm, considering the transistor count this hypothetical thing might have, and that 55nm will have been in use for almost a year by the time it's estimated to hit shelves?

Before we get into those details, though, I'd like to read a couple of educated guesses about G92's transistor density and whether they've removed any redundancy there (always in combination with the two fewer ROP partitions compared to G80).
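Since the question is about G92's transistor density, here's a quick back-of-the-envelope comparison using commonly reported (approximate) transistor counts and die sizes; treat them as ballpark figures only:

```python
# Back-of-the-envelope transistor-density comparison for G80 vs. G92,
# using commonly reported approximate figures (die sizes especially are estimates).
dies = {
    #              (Mtransistors, die size mm^2, process nm)
    "G80 (90nm)": (681, 484, 90),
    "G92 (65nm)": (754, 330, 65),
}

density = {name: mtr / area for name, (mtr, area, _) in dies.items()}
for name, d in density.items():
    print(f"{name}: ~{d:.2f} Mtransistors/mm^2")

# An ideal 90nm -> 65nm shrink would roughly double density ((90/65)^2 ~ 1.92x);
# the observed gain is noticeably lower, which is why die size alone doesn't tell
# us how much redundancy (if any) was stripped out of G92.
ideal = (90 / 65) ** 2
observed = density["G92 (65nm)"] / density["G80 (90nm)"]
print(f"ideal shrink: {ideal:.2f}x, observed: {observed:.2f}x")
```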
 
If there really is a 512-bit memory interface, it must have 32 ROPs. I wonder if GT200 = G9x, so G100 = GT200 too?
 
I suspect there could be further improvements to the current G8x/G9x SPs. Maybe dual MADD? Or could this architecture be totally different from what we see now? Probably not, but I'm guessing we are going to see something along the lines of an NV40-to-NV47 jump.

Easy ones to guess are:
Probably using the 65nm process.
Unified architecture
Video engine
DX10.1 / SM4.1
PCI-e 2.0
 
So you're saying GT200 = D9E = G9x? If true, then GT200 is the GF9800 and will be launched in March, but as you can see, G9x doesn't offer any architectural improvements over G80. How could it be significantly faster than the GF8800 Ultra? They need a very powerful new GPU, because RV770 is said to be about 50% faster than RV670.


Well, I have no idea about the GT200 name. All I am saying is that Nvidia's next high-end GPU (that's not a revised current GPU put on a new card), the one most often known as D9E or 'NV55', will be a refresh/overhaul of G80. When I said G9x, I was leaving room for Nvidia to have a G9x that IS a major architectural improvement over G80/G92. Though it may indeed be called G100.
 

I seriously doubt that; the 9800 GX2 should be the highest of the high end in the 9800 series, so it leaves little to no room for a new chip.
GT200/G100/whatever should be "GF10k" or something similar.
 
Look at this situation: the GF9800 GX2 is expected somewhere in March (CeBIT?). The question is why NVIDIA is releasing this card. If NVIDIA has a much faster GPU, why doesn't it release it now instead of the GF9800 GX2? Then it seems the GF9800 GTX will be slower than the GX2, I think.
 
The much faster chip was supposed to have arrived in November-ish. When it didn't, everything got scrambled.

Jawed
 
Look at this situation: the GF9800 GX2 is expected somewhere in March (CeBIT?). The question is why NVIDIA is releasing this card. If NVIDIA has a much faster GPU, why doesn't it release it now instead of the GF9800 GX2? Then it seems the GF9800 GTX will be slower than the GX2, I think.

If all those scenarios are true, then probably for the same reason they released G71 in the past: a "gap-filler" until the next major update/refresh. Even 6 months is a mighty long time in this market, and it's never really a good idea to leave too large a gap between product families. How long can they really sit on their past G8x laurels without even a moderate performance increase?

They obviously need an answer for AMD's R680; no answer means immediately better sales for AMD.

As to why they won't release it right now: IHVs don't sit on ready chips and brood over them until they turn green and hairy, heh. If we're talking about a huge increase in chip complexity/transistor budget, it takes its time. If one of the supposed news blurbs in the initial post of this thread should be close to reality, ~1.8 billion transistors on 55nm sounds like a huge step. Why not today? 'Cause it ain't possible today.
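To put that transistor figure into perspective, here's a very rough die-size estimate; the density is just an assumption extrapolated from G92 with an optimistic shrink, so take it as an order-of-magnitude check only:

```python
# Rough die-size sanity check for the "~1.8 billion transistors on 55nm" blurb.
# Assumes density scales from G92's reported ~754M transistors on ~330 mm^2 at 65nm
# by an ideal (65/55)^2 shrink, which is optimistic; real designs scale worse.
g92_density = 754 / 330                      # ~2.3 Mtransistors/mm^2 at 65nm
density_55nm = g92_density * (65 / 55) ** 2  # optimistic 55nm density

die_size_mm2 = 1800 / density_55nm           # area for ~1.8B transistors
print(f"estimated die size: ~{die_size_mm2:.0f} mm^2")  # ~560 mm^2, larger than G80's ~484 mm^2
```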
 
G100 (D9E)
55nm
>1 billion
dx10.1
PCI-e 2.0
256 vertex shaders
16 ROPS
128 texture clock
780core 2Ghz shader
fillrate 99840 Mtexels
512bit bus
Gddr4 at 2Ghz
memory bandwidth 128GB/s
release March 2008

It was posted on XtremeSystems by some guy :) Do you think it could be true? I'm thinking about the release date, which is March 2008. The GF9800 GTX and GX2 at the same time?

PS. And only 16 ROPs with a 512-bit memory bus, hmm.
 

Uhmmm, ironically 16 ROPs could eventually make more sense than the rest of the bullshit, heh...

256 vertex shaders

Hellooooo! We have USCs nowadays....

128 texture clock

hmmm....

780core 2Ghz shader
fillrate 99840 Mtexels

ahhhh so that's 128TMUs@780MHz; well whatever.....

512bit bus
Gddr4 at 2Ghz
memory bandwidth 128GB/s

That's the best kneeslapper of them all; 2GHz RAM on a 512-bit bus gives 256GB/s of bandwidth.
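For reference, a quick check of where those two rumoured numbers come from, and where the 128 GB/s vs. 256 GB/s discrepancy lies:

```python
# Sanity check of two numbers from the rumoured spec list above.

# Texel fillrate: 128 TMUs x 780 MHz core = 99,840 Mtexels/s, matching the list.
tmus, core_mhz = 128, 780
print(f"fillrate: {tmus * core_mhz} Mtexels/s")

# Bandwidth: "GDDR4 at 2GHz" on a 512-bit bus. If 2 GHz is the physical clock
# (4 GT/s effective with DDR), that's 512/8 * 4 = 256 GB/s, i.e. double the
# listed 128 GB/s; the list only adds up if "2 GHz" already means the
# effective data rate.
bus_bits = 512
print(f"at 4 GT/s effective: {bus_bits // 8 * 4} GB/s")
print(f"at 2 GT/s effective: {bus_bits // 8 * 2} GB/s")
```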

release March 2008

sheesh not a single mention of a ton of eDRAM; I'm fairly disappointed :p

It was posted on XtremeSystems by some guy :) Do you think it could be true?

Supposed speculators will continue to compile senseless lists; it comes down to the reader to figure out what can or cannot make sense.

I'm thinking about the release date, which is March 2008. The GF9800 GTX and GX2 at the same time?

IMHO, if there's going to be a 9800 GTX, it's merely going to be a very highly clocked G92. Logically, it would replace the 8800 GTX/Ultra as the follow-up single-chip high-end solution.
 
It was posted on XtremeSystems by some guy :) Do you think it could be true?
To me, that feels like a mix of what was posted at hardware-aktuell (see this thread's first post) and a bunch of random bullshit.
 
I expect the next-gen Nvidia card to be DX10.1.

Reason?

Well, Nvidia surprised everyone with NV40 (the 6800), which was SM3.0.
Nvidia surprised everyone with G80 (the 8800), which had been reported to be a non-unified PS/VS DX10 (SM4.0) card.

US
 
I don't think being able to support DX10.1 is the problem. It's likely a political choice: unless you replace your line-up from top to bottom to support DX10.1 on the same day, supporting it would only indirectly encourage sales of the competition's lower-end products, which do support DX10.1...
 
I expect the next-gen Nvidia card to be DX10.1.

Reason?

Well, Nvidia surprised everyone with NV40 (the 6800), which was SM3.0.
Nvidia surprised everyone with G80 (the 8800), which had been reported to be a non-unified PS/VS DX10 (SM4.0) card.

US

If you mean the 9x00 series by "next gen", you're wrong; this has been proven by the GF9600, for example.
 
Well, the 9600 is G92... not next-gen.

US

The 9600 isn't G92; it's G94 or G96 or something along those lines.
Anyway, that's why I said "if you mean GF9x00 by next gen"; you clearly mean G100/GT200/whatever, which will probably be GF10k or carry some new naming scheme.
 