The G92 Architecture Rumours & Speculation Thread

http://www.tcmagazine.info/comments.php?shownews=15261&catid=2

With the first-generation DirectX 10 line-up from both AMD and Nvidia settling in, it is that time again: time to talk about what's coming up next. And since AMD is only now getting the low- and mid-range cards out the door, this new trip to Rumorville will be focused on the Green Goblin of Santa Clara, better known as Nvidia. Previous sightings of the G92 chip have suggested that the card powered by it will be the next high-end offering, with 1 TFlop worth of performance, but the latest rumors say that we're actually looking at a new chip that will be built on 65nm and that will be put on an upper mid-range card.

The info tells us that the next mid-range card will feature the G92, that it will have 512 MB of memory and a 256-bit memory interface, and that it will perform better than the 8600 GTS and right under the 8800 GTS. In addition to the mid-range refresh, Nvidia will be releasing a new high-end card (G90?), and all that will happen in November. Wait for it.

So what they're saying is that both G92 and presumably G90 (the high end) will launch together?

IF they can make this G92 a single-slot solution, I'll be happy and gladly buy the card.
 
I don't disagree with most of what you said there, with one (big) exception: What makes you think those are interim solutions? Because I'm pretty sure they are not.
Because I do not think they will be DX10.1, and the code names do not indicate big changes. Both companies have big gaps between mid-range and high-end in their DX10 portfolios, and both lack a fast DX10 GPU with acceptable power consumption (65/55nm!) for the important notebook market.
I think G92/RV670 will be these chips. But with their attributes they are also predestined to be used in proven dual-GPU solutions, to make some money in the (post-)Christmas sales in the enthusiast market.

It will not be a 'checklist' feature, as it won't be on GeForce checklists and Tesla users couldn't care less about checklist features.

Throughput is obviously going to be a few to several times lower than in FP32, but this is to be expected. Consider the DP CELL: it achieves 'only' 100 GFlops in DP mode. If FP64 on G92 is 3-5x slower, it can still easily beat that.
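To put rough numbers on that (assuming the rumoured ~1 TFlop FP32 figure holds): a 3-5x FP64 penalty still leaves somewhere between 1000/5 = 200 and 1000/3 ≈ 333 GFlops of double precision, i.e. two to three times the CELL figure above.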
Sure, "checklist feature" was a little bit to hard, but I think it will be a less expensive implementation but enough to be competitive.
 
Because I do not think they will be DX10.1, and the code names do not indicate big changes.
But DX10.1 is a very small change. It is arguably the smallest change that has ever warranted a version increment in all of DirectX's history... That doesn't mean there aren't some nice goodies in there, of course, but it's really just small stuff that didn't make the cut into DX10.0...

Both companies have big gaps between mid-range and high-end in their DX10 portfolios, and both lack a fast DX10 GPU with acceptable power consumption (65/55nm!) for the important notebook market.
The fundamental reason behind this is that both companies went with a three-chip line-up, and yet also included a 'mega-chip' in terms of die size. You simply cannot hit all market segments with that kind of constraint.

I think G92/RV670 will be these chips. But with their attributes they are also predestined to be used in proven dual-GPU solutions, to make some money in the (post-)Christmas sales in the enthusiast market.
Correct. Well, presumably G92 wasn't the original plan (G82?), but that's beside the point.

However, you keep assuming that NVIDIA and AMD want to have a new monster-chip at 400mm2+ and that this is nothing but an intermediary step. I don't see why you are so confident in that possibility. Everything points in the opposite direction, IMO...

What you might see on the NVIDIA side is a shrink of G92 to 55nm. It might not even be a straight shrink, and might change some things slightly... However, the point remains that it should presumably be a smaller chip, not a larger one!

Sure, "checklist feature" was a little bit to hard, but I think it will be a less expensive implementation but enough to be competitive.
Obviously, the GPU remains optimized around FP32. But that doesn't make FP64 any less important for some specific parts of the GPGPU market... And the cost for a decent throughput rate (3-5 cycles?) should be 5-10% at worst. This is more than enough to be very competitive with alternatives in that market.
 
With respect to "stores" in CUDA, I believe they are coalesced, making them reasonably efficient if they are coherent, and only "uncached" if they are generic scatters (which need not be cached anyway). Of course bad cases can be constructed, but typical programs run at least as efficiently as if they were done through GL/DX.
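For anyone following along, here is a minimal CUDA sketch of the distinction being described; the kernel and buffer names are mine, not from any NVIDIA sample. Consecutive threads storing to consecutive addresses can be combined into a few wide transactions, while a data-dependent scatter offers no such guarantee.

```cuda
// Coalesced pattern: thread i writes element i, so neighbouring threads
// hit neighbouring addresses and the hardware can merge the stores.
__global__ void coalescedStore(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = 2.0f * i;       // coalesced store
}

// Generic scatter: the target address depends on data, so the worst case
// is one memory transaction per thread ("uncached" behaviour).
__global__ void scatteredStore(float *out, const int *idx, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[idx[i]] = 2.0f * i;  // scattered store, no coalescing guarantee
}
```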

Also, with doubles, I suspect G92's core will support them, potentially at half speed (or worse on the GeForce models). Thus I would not be surprised if they are supported for "in-shader math", since DX10, for instance, already has a double type. I *would* be surprised if graphics APIs added the associated storage formats anytime soon (meaning you can't read doubles from memory), and I suspect ROP support will come even later (meaning you can't write doubles to memory). These operations will of course be available in CUDA with very little modification to the language required.
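As a rough illustration of the "very little modification" point, assuming the speculation above pans out (the kernel name is hypothetical): double is already a C type, so an FP64 kernel would look exactly like its FP32 counterpart, just with a different element type. On hardware without FP64 units, the compiler would presumably demote the doubles to floats.

```cuda
// Hypothetical FP64 CUDA kernel, identical in shape to an FP32 one.
// Loads and stores stay ordinary memory accesses; only the arithmetic
// would run at whatever FP64 rate the chip provides.
__global__ void daxpy(double a, const double *x, double *y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}
```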
 
This is a ripoff of a previous VR-Zone rumour, as far as I can tell. Let's simply say that some parts of it probably shouldn't be taken too literally.

One thing I do not understand is how the G92 can be 3/4 or full scale of the G80.

Through the shader clock speed, as well as the scaling of the complex.

R600 is suitable for a die shrink; G80, on the other hand, is hard to die-shrink without a proper redesign.
 
This doesn't make a lot of sense to me. In fact, I could come up with more believable rumours in just a few seconds, so... That doesn't mean it's wrong, of course, but it's not quite enough to convince me it's right! ;)
 
translation?
Maybe this will help you:
http://translate.google.com/transla...&hl=de&ie=UTF-8&oe=UTF-8&prev=/language_tools

This doesn't make a lot of sense to me. In fact, I could come up with more believable rumours in just a few seconds, so... That doesn't mean it's wrong, of course, but it's not quite enough to convince me it's right! ;)
Sure, we should take it with skepticism, but the guy is a moderator at this board, and pcinlife has been a good source of early facts in the past.

He also says something about the G90, which was supposed to be a big chip with a 512-bit MC, but he says NV wanted to lower risks and cancelled this GPU, or rather put it in reserve in case it is needed some day.
 
It seems to me that G92 is a 3/4-scale G80, plus a bit more.

Although I can understand the Chinese, the content is not as credible as I expected.

How would they solve the interconnection between the two chips (G92 X2)?

Furthermore, G92 X2 does not mean two times the 8800 GTX's performance.

Why does it have to?
Can't 1.5x or 1.8x be good enough?

Also, I believe the interconnect thing may have something to do with the Hybrid SLI tech they have announced for future IGPs, albeit tuned for high-end performance instead of dynamic power savings.
DX10 also removes some of the constraints that prevented the 79xx GX2 solutions from correctly scaling their performance with the number of GPUs in many games.
 
Maybe this will help you:
http://translate.google.com/transla...&hl=de&ie=UTF-8&oe=UTF-8&prev=/language_tools


Sure, we should take it with skepticism, but the guy is a moderator at this board, and pcinlife has been a good source of early facts in the past.

He also says something about the G90, which was supposed to be a big chip with a 512-bit MC, but he says NV wanted to lower risks and cancelled this GPU, or rather put it in reserve in case it is needed some day.

Sounds like a bag of salt to me.
 
I wonder how many SPs G92 has, and what about the price? I hope there will be a $200-250 card...

BTW, is this a midrange part (GF9600?) or more like a GF9800 GTS?
 