The G92 Architecture Rumours & Speculation Thread

G92 is a performance GPU.
Those who want 1 TFLOPS in a single chip, and more units than G80, will have to look for another codename...
Last I heard, G80 shrunk to 65nm with more ALUs would probably fit in the die size requirements of a 'performance GPU'... I don't know if it's that though, but it is worth pointing out.
 
Can you refresh me on the RV610?
Aren't there already enough low and very low end DX10 cards?
A new midrange (G98?) and high end (G92?) is what I'm waiting for.
 
From a Brazilian site (translated by Google):

G92 running at 800MHz?

NVIDIA is keeping the details of the G92 and G98, the next chips set to turn gamers' heads, under lock and key.

However, some information circulating on the web says that the G92, the company's top-end part, will run its GPU at between 800 and 850MHz (depending on the model - GTX/GTS), thanks to its 65nm manufacturing process. Final clocks will of course only be set at the last moment, while the company waits for information on its rival, the R680.

Another detail is that the chip should bring improved support for the OpenGL 3 graphics API (Longs Peak), or may even already be compatible with Mount Evans (an updated version of Longs Peak). There is no consensus on whether it will support DirectX 10.1.

G98 should be NVIDIA's next success

Many enthusiasts are asking themselves what NVIDIA's chip codenamed G98 actually is.

According to information gathered from sources close to the GPU giant, the G98 is nothing more than a new chip aimed at the mid-range segment, bringing as its main novelties a wider memory bus (probably 256-bit) and a larger number of shaders.

Thus, boards based on the G98 core should follow in the footsteps of the current 8500/8600 generation, that is, a success with both the public and the critics. The suggested price would be around US$199.

Huh??
 
Last I heard, G80 shrunk to 65nm with more ALUs would probably fit in the die size requirements of a 'performance GPU'...

Full G80 with added DP and maybe 192SPs (8*24) should fit in 200-250mm²? :???:

Can you refresh me on the RV610?
RV610 is a very nice low-end chip, which is very small (~70mm²) and has nice features (UVD). So NV shrinks the G86 (~120mm²) to 65nm and adds a new video processor (VP3) to compete -> G98.

A new midrange (G98?) and high end (G92?) is what I'm waiting for.
Mid-Range/Mainstream will come in spring 2008 -> G96.
 
Full G80 with added DP and maybe 192SPs (8*24) should fit in 200-250mm²? :???:
DP doesn't add much since it's quarter-speed. As for die size, 200-300mm² is definitely possible for that IMO, especially if you consider G70->G71, which was just a half-node shrink, not a full-node one...
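
Quick back-of-the-envelope on that (the ~484mm² G80 figure and the perfect-shrink assumption are mine, purely illustrative):

[code]
# Idealised die-area scaling for a 90nm design moved to 65nm.
# Assumes area scales with the square of the feature-size ratio and
# ignores pad-limited I/O, analog blocks, layout inefficiencies, etc.
g80_area_90nm = 484.0            # mm^2, commonly cited G80 die size (assumption)
scale = (65.0 / 90.0) ** 2       # ~0.52x area for a full-node shrink

print(g80_area_90nm * scale)     # ~252 mm^2
# A straight shrink already lands near 250mm^2; extra ALUs/DP hardware
# on top would push it towards the upper end of that 200-300mm^2 window.
[/code]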

So NV shrinks G86(~120mm²) to 65nm and add an new video-processor (VP3) to compete -> G98.
Well, they'd also want to reduce the number of ROPs to 4 (since it'll presumably be 64-bit only) and probably increase the number of ALUs. It certainly doesn't make sense to increase the number of TMUs if you can't increase the bandwidth; on the other hand, it obviously has to use DDR2 for cost reasons. I wonder if G98 will be bigger or smaller than 80mm²...
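
A rough sketch of why more units wouldn't pay off on such a narrow bus (the 600MHz core and 400MHz DDR2 clocks below are just assumptions for the arithmetic, not specs):

[code]
# Compare peak pixel output of 4 ROPs against the bytes of DDR2 bandwidth
# available per pixel written -- hypothetical G98-ish numbers.
rops, core_mhz = 4, 600                    # assumed core config/clock
bus_bits, mem_mhz = 64, 400                # 64-bit DDR2 @ 400MHz (assumed)

pixel_rate = rops * core_mhz * 1e6                 # pixels/s
bandwidth = bus_bits / 8 * mem_mhz * 1e6 * 2       # bytes/s (DDR signalling)
print(bandwidth / pixel_rate)                      # ~2.7 bytes per pixel
# A plain 32-bit colour write alone needs 4 bytes/pixel (before Z and
# texture traffic), so extra ROPs or TMUs would mostly sit idle.
[/code]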

Mid-Range/Mainstream will come in spring 2008 -> G96.
Out of curiosity, where is that info coming from? :) Also, I am wondering if G96 is 55nm, hmm... Sadly, given that codename, I am skeptical performance will be that spectacular. Presumably G92's pricing will hit $199 eventually, making that irrelevant.
 
From a Brazilian site (translated by Google):

G92 running at 800MHz? [...]

It would be easier if you went to the original source, but if you have the link I can translate it better than the big G.
 
Latest rumours from HKEPC about G92 and friends (hope I've interpreted that Google auto-translation correctly):

- 65nm TSMC.
- To be launched on November 12th (dunno if by that they mean a hard-launch or a paper-launch, though).
- Will replace the 8800 GTS directly, in anticipation of a possible December-ish RV670 launch.
- There is a G98 ready to compete against a RV620, launch dates still unknown on either of them. Could be a G84 replacement, for a CES 2008 January introduction (hinting at mobile part first, desktop version in late Winter/early Spring).
- They say it has a "TCCD" memory controller (let's hope they don't mean the famous Samsung PC3200 IC's :D).
- PCI-Express 2.0, DX10.1 (as expected), DisplayPort and HDMI support.
- Purevideo Generation 3.
- G92 will remain strictly a "Geforce 8" family member, despite what the codename might have implied.

As always at this stage, take it with a pinch of salt.

http://www.hkepc.com/?id=45

G92 as an 8800 GTS replacement alone doesn't make much sense to me, unless the part has already become too costly to make on the old 90nm process (the better yields of a large die on a mature process no longer being worth it relative to the sheer number of good dies the newer process delivers, etc).
But the issue of the high-end part is still unresolved with this rumor, hence my reservations about it.
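
Just to put that yield trade-off into toy numbers (every figure below is made up for the sake of the arithmetic, none of it is real wafer pricing or yield data):

[code]
# Cost per good die: big die on a cheap mature process vs. a smaller shrink
# on a newer, pricier, initially lower-yielding one.
import math

def cost_per_good_die(wafer_cost, die_area_mm2, yield_frac, wafer_diameter_mm=300):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    gross_dies = wafer_area / die_area_mm2   # ignores edge loss for simplicity
    return wafer_cost / (gross_dies * yield_frac)

# Hypothetical numbers only:
print(cost_per_good_die(3000, 484, 0.7))   # mature 90nm, G80-sized die: ~$29
print(cost_per_good_die(4500, 290, 0.6))   # early 65nm shrink:          ~$31
# With made-up numbers like these the two come out roughly even; the case
# for the shrink only closes once 65nm yields and wafer pricing mature.
[/code]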
 
Seems to me HKEPC has usually been pretty accurate with their leaks. I don't really see the point of a 65nm 8800 GTS replacement a year after release, though.
 
The transition to a 65nm process could reduce the die size quite a bit and lower power consumption. It would be great if this card were a single-slot monster.

The reduced bandwidth, combined with the intent to "replace" the 320-bit 8800 GTS, implies that the card would still have to perform on par with or faster than the 8800 GTS. Are they pairing the G92 with fast GDDR4? I also wonder if the ALU/TEX ratio will be changed.
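
Rough numbers on that (the 256-bit width and the GDDR4 clock are pure assumptions on my part, nothing confirmed):

[code]
# Peak bandwidth = bus width in bytes * effective memory data rate.
def gb_per_s(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

print(gb_per_s(320, 1600))   # 8800 GTS: 320-bit, 800MHz (1.6GHz) GDDR3 -> 64 GB/s
print(gb_per_s(256, 2200))   # assumed G92: 256-bit, 1.1GHz (2.2GHz) GDDR4 -> ~70 GB/s
# So a narrower bus could still match or beat the GTS if paired with fast GDDR4.
[/code]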

So the G92 isn't the 1 teraflop monster? A GX2, or dual G92, sure sounds more realistic then, unless a G90 exists (or whatever it's called).

8800GT or 8900GTS?

Which sounds nicer?
 
8800GT or 8900GTS?

Which sounds nicer?

With all those technical enhancements (DX10.1 in particular) I'm betting on 8900GTS.
 
Latest rumours from HKEPC about G92 and friends [...]

Close to 8800 GTS performance in the $220 segment with 512MB of memory, less heat and lower power consumption doesn't sound bad. Two months ahead of RV670, and 4 days before the Crysis launch, what a "surprise" :smile: (Crysis-optimized G92 beta driver, anyone? ;))
 
VR-Zone reports also about this:
http://www.vr-zone.com/?i=5207
[strike]... but they also mention that G92 will have 50GTex/s fill-rate. :oops:

But I think this value belongs to the GX2: 2x32TMUs/TAUs * 780MHz -> 50 GTex/s.[/strike]
edit: 5.0GT/s for PCIe 2.0 and not fill-rate... X-D

G92 with 4 Clusters @ ~800MHz / ~2GHz(SD) should be a very good/cheap replacement for 8800GTS, which should drop in price in early 2008 below $200.
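
For what it's worth, the ALU side of that trade adds up on paper (a sketch assuming G8x-style clusters of 16 SPs and NVIDIA's MADD+MUL counting; the 4-cluster/2GHz config is just the rumour above, not a confirmed spec):

[code]
# Theoretical shader throughput, counted the way NVIDIA does for G8x
# (MADD + MUL = 3 FLOPs per SP per clock).
def gflops(sps, shader_ghz, flops_per_clock=3):
    return sps * shader_ghz * flops_per_clock

print(gflops(4 * 16, 2.0))    # rumoured G92: 64 SPs @ 2.0GHz  -> 384 GFLOPS
print(gflops(96, 1.188))      # 8800 GTS:     96 SPs @ ~1.2GHz -> ~342 GFLOPS
# Fewer, faster clusters edge out the GTS on paper shader math, which fits
# the "cheap 8800 GTS replacement" angle.
[/code]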
 

I just read the translation and found this:

In addition, the article also addresses the earlier reports about whether NVIDIA's G80, G84 and other chips would support DirectX 10.1 and Shader Model 4.1, something that had caused a scare in the market. NVIDIA has recently clarified that those reports were rumours, saying that its G80, G84 and related chips are already prepared for DirectX 10.1 and Shader Model 4.1; when Microsoft releases DirectX 10.1 and Shader Model 4.1, NVIDIA will provide driver support and the issue will be resolved smoothly.
 
edit: 5.0GT/s for PCIe 2.0 and not fill-rate... X-D
I am pretty sure VR-Zone was thinking of a 50 GigaTexels/s fillrate there and probably wasn't confusing it with 5.0GT/s for PCIe.

Whether that information is correct is another problem, of course. However, may I point you in this direction for a better understanding of how NVIDIA has defined GTexels/s numbers since the G80: http://www.dailytech.com/NVIDIA+G80+Retail+Details+Unveiled/article4441.htm

I have absolutely no clue whatsoever if my guess is correct there, but it does make a fair bit of sense for NVIDIA to have kept the same number of TMUs as in the G80 and just clocked them higher.
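
To make the counting explicit (the G92/GX2 configurations below are just the guesses from this thread, not specs):

[code]
# NVIDIA's G80-era texel-rate figure multiplies the (larger) filtering-unit
# count by the core clock, which is how the 8800 GTX gets its 36.8 GTexels/s.
def gtexels_per_s(filter_units, core_mhz):
    return filter_units * core_mhz * 1e6 / 1e9

print(gtexels_per_s(64, 575))      # 8800 GTX as NVIDIA quotes it: ~36.8
print(gtexels_per_s(64, 780))      # single chip with G80's 64 units @ 780MHz: ~50
print(gtexels_per_s(2 * 32, 780))  # or a GX2 with 2x32 units @ 780MHz: also ~50
# Either reading lands on VR-Zone's ~50 GTexels/s figure.
[/code]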
 