The G92 Architecture Rumours & Speculation Thread

Can someone link to the article that talks about TMUs? I can't seem to find it. I still don't understand exactly how TMU, TFU and TAU relate to each other.
 
Can someone link to the article that talks about TMUs? I can't seem to find it. I still don't understand exactly how TMU, TFU and TAU relate to each other.

IIRC

TAU is essentially fetching the textures.
TFUs allow for a smaller performance hit when enabling AF, as they can filter texels faster than new ones are being fetched?

I'm not quite sure either. Whenever I try to go through G80's architecture in my mind I always think of this picture (c/o Arun):

http://www.beyond3d.com/images/reviews/g80-arch/TheG8.png

and realize it's far too much for me to take in sometimes when broken down into the finest pieces.

Dammit, I miss the days when ROPs/PixelS/VertexS/TMU were essentially the key ingredients. Now everything's decoupled and minced into finer units, and you're not the only one who finds it confusing sometimes, even if you'd like to understand.
 
Seems ATI/AMD are in second place again at the mid-range.

I have an X1950 XTX; it's run great for a year.
I'm going to upgrade soon, and since ATI/AMD haven't released any info on the upcoming November release, it seems that NVIDIA is the way to go.

The 8800 GT seems like a sweet mid-range card to get.
For me it's simply such a great deal that I wonder: what can AMD/ATI come up with, if anything?
It's ironic that some complain there is no info prior to a rumored hard launch and others would complain if there was a soft launch. I guess companies can't win.
 
Can someone link to the article that talks about TMUs? I can't seem to find it. I still don't understand exactly how TMU, TFU and TAU relate to each other.
I linked it in one of my previous posts, as I said already. It's in Rys' first G80 article. Check the "Reviews" area of this site.

OK, I've spent too much time hacking out an explanation of TAs and TFs, so I'll post it hidden behind spoiler tags if you're so desperate that you'll risk bad info. Otherwise, either read Rys' article or hit up Google or just wait for someone knowledgeable to explain.
At the ever-present risk of talking out of my ass, I'll take a stab at summarizing. The TA fetches the data from memory (color, in the case of a texture). The TF performs a bilinear filter on it per clock. Since trilinear filtering requires two bilinear filters, and the G80 has two TFs for each TA, the 8800 GTS/GTX essentially offers free trilinear filtering, in the sense that the second TF would otherwise have nothing to do if only bilinear were called for, as the single TA can't feed both TFs simultaneously. I think the idea is that the TA fetches the data for TF1 on the first clock, then for TF2 on the next clock while TF1 performs the second bilerp to complete the trilinear filter.

I'm losing my grasp on things, though, so I'm not exactly sure how that second TF offers free 2xAF, because it seems you need as many data fetches as you need filter ops. I can't even figure out how my example works for trilinear filtering, as it seems each TF would require data from a different MIP map each clock, unless the texture cache is involved somehow. So, don't trust any of this.
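To make the ratios above concrete, here's a minimal back-of-the-envelope sketch in Python. It only encodes the assumptions from this post (one fetch per TA per clock, one bilerp per TF per clock, trilinear = two bilerps, and the "free trilinear" reading in which the addressing cost per texel does not double), so treat it as illustration rather than a statement of how G80 actually schedules its units.

```python
# Back-of-the-envelope texel throughput for a 1 TA : 2 TF ratio.
# Assumptions (from the post above, not from any official source):
#  - each TA produces one texture fetch per clock
#  - each TF produces one bilinear sample (bilerp) per clock
#  - trilinear costs two bilerps per texel
#  - addressing cost per texel stays at 1 (the "free trilinear" reading)

def texel_rate(clock_mhz, num_ta, num_tf, bilerps_per_texel):
    """Texels/s, limited by whichever of addressing or filtering is slower."""
    addr_rate = num_ta * clock_mhz * 1e6                         # texels/s the TAs can feed
    filter_rate = num_tf * clock_mhz * 1e6 / bilerps_per_texel   # texels/s the TFs can finish
    return min(addr_rate, filter_rate)

# 8800 GTX-like numbers quoted later in the thread: 575 MHz, 32 TA, 64 TF
bilinear  = texel_rate(575, 32, 64, bilerps_per_texel=1)
trilinear = texel_rate(575, 32, 64, bilerps_per_texel=2)
print(f"bilinear : {bilinear / 1e9:.1f} GTexels/s")   # 18.4, TA-limited
print(f"trilinear: {trilinear / 1e9:.1f} GTexels/s")  # 18.4 as well -> "free" trilinear
```

Under those assumptions both cases come out at ~18.4 GTexels/s, which is the sense in which the extra TFs make trilinear "free"; whether the same trick extends to 2xAF is exactly the part the post is unsure about.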
 
I see, thanks. A little clearer. I feel like a doctor trying to understand the proper way to repair a damaged heart valve, but not bothering to learn how to properly open the chest cavity first.
 
OK, I think I just had a light-bulb moment. I know this is basic stuff for you guys, but here it goes: the G80 has 8 shader processors, each of which has 4 TAU and 8 TFU, so essentially TMU = TAU. The 36.8 GT/sec fillrate often quoted for the G80 is INTERNAL throughput, correct? Based on the 575 MHz core x 64 TFU. This helps decrease the impact of filtering such as AF, because the TA units don't have to wait for the TFU: since there are twice as many TFU, a filter unit is always ready for the TAU's data? But because there are a total of 32 TAU, actual external texel throughput on the G80 is 18.4 GT/sec. And it is able to reach very close to this theoretical limit because of adequate memory bandwidth... unlike what appears to be going on with the 8800 GT.

Unlike the G80, it only has 7 shader processors, but like the G80 it has 8 TFU per shader processor. Unlike the G80, its number of TFU = TAU, so it has 56 TFU (7x8) as well as 56 TAU (7x8) [the G80 has 32, 8x4]. However, there is a bottleneck present in the 8800 GT that appears to prevent it from even getting close to its theoretical 33.6 GT/sec fillrate; instead it appears to reach only around 25 GT/sec. So far people think this inefficiency is due to either memory bandwidth, the speed of the shader processors, or perhaps both. I can understand how memory bandwidth could play into this, but I can't even begin to see what the speed of the shaders would have to do with it???

now i also need to understand how ROPs play into this.:LOL:
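For what it's worth, the arithmetic in the post above can be checked in a few lines. The unit counts and the 36.8/18.4/33.6/25 GT/s figures are the ones quoted in the post; the 600 MHz 8800 GT core clock is simply inferred from 33.6 / 56, so take it as an assumption.

```python
# Quick check of the fillrate figures quoted in the post above.
def gtexels_per_sec(clock_mhz, units):
    return clock_mhz * units / 1000.0   # MHz * unit count -> GTexels/s

print("G80 internal (575 MHz x 64 TFU):", gtexels_per_sec(575, 64))  # 36.8
print("G80 external (575 MHz x 32 TAU):", gtexels_per_sec(575, 32))  # 18.4
print("8800 GT peak (600 MHz x 56 TAU):", gtexels_per_sec(600, 56))  # 33.6
print("8800 GT measured / peak        :",
      round(25 / gtexels_per_sec(600, 56), 2))                       # ~0.74
```

That last ratio (roughly 74% of theoretical) is the "inefficiency" being debated; the sketch says nothing about whether memory bandwidth or shader clocks cause it.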
 
2900guy

I have written an article that explains modern texture addressing/fetching/filtering units in detail. However, the article is in Russian. If you can read Russian (or if there is a decent translator), I could give you a link to it by PM (it is still unpublished).
 
I can't believe that G92 contains nothing new: basically it's a G80 with a 256-bit bus, PCIe 2.0, a 65nm process and an updated video engine.

I believe NVIDIA is hiding something in G92; they just haven't activated it yet. :p
 
I believe NVIDIA is hiding something in G92; they just haven't activated it yet. :p
:LOL:
Mid-November:
"Today NVIDIA announced that all GeForce 8 series graphics cards will support D3D10.1 through a later driver release, which will come contemporaneously with Vista SP1.
They decided to announce this relatively late because, in their opinion, they have more to offer than just a filled feature list."

Well, two G92 cores with all 8 blocks enabled on one PCB and a 512-bit bus would be a beast.
Yes, a beast in power consumption and BOM, since it would have to ship with 1GiB of fast GDDR4 per GPU -> 2GiB in total, because of AFR... ;)
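A trivial sketch of that AFR point, just to spell it out: each GPU needs its own full copy of the working set, so board memory on the BOM scales with GPU count while the amount usable for a single frame does not. (The 1 GiB per GPU figure is the one from the post; the rest is just multiplication.)

```python
# AFR memory bookkeeping: every GPU holds a full copy of the data.
per_gpu_mem_gib = 1          # figure quoted in the post
num_gpus = 2                 # dual-G92 board speculated above
board_memory_gib = per_gpu_mem_gib * num_gpus    # 2 GiB to buy and power
usable_per_frame_gib = per_gpu_mem_gib           # still only 1 GiB per frame
print(board_memory_gib, usable_per_frame_gib)
```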
 
:LOL:
Mid-November:
"Today NVIDIA announced that all GeForce 8 series graphics cards will support D3D10.1 through a later driver release, which will come contemporaneously with Vista SP1.
They decided to announce this relatively late because, in their opinion, they have more to offer than just a filled feature list."

:D Hopefully.
We'll see more clearly when the GeForce 9 series is released. Hopefully it's based on G9x or something, and we might be lucky enough to see G92 rebranded as a GeForce 9600 or even a 9700. :cool:
 
2d6vds3.jpg

Look at the graphs!

:LOL:

edit - Didn't realise this was a repost. My bad.
 
:LOL:
Mid-November:
"Today NVIDIA announced that all GeForce 8 series graphics cards will support D3D10.1 through a later driver release, which will come contemporaneously with Vista SP1.
They decided to announce this relatively late because, in their opinion, they have more to offer than just a filled feature list."

That would be a little stupid, because they'd be losing a lot of the market just because they don't have DX10.1 support/a sticker on the box.
So draw your own conclusions. Things should be clear from day one, not handled like that.
 
2d6vds3.jpg

Look at the graphs!

:LOL:

Pretty common though, everyone does that (yes, even daamit). The PR dudes know how to trick the buyers.

Spend a day downtown and ask this Q: Do you know what RV670 is? If you ask 300 people I bet you'll get 299 incorrect answers. :LOL:
 
It's possible that they didn't enable DX10.1 support on the 8800 GT now because it would affect the marketing of the GTX and Ultra.

Note: 2-3 months back, before we knew what G92 really was, every rumour said it would support DX10.1 and OpenGL 3. ;)
 