So... what will G80/R600 be like?

DemoCoder said:
I think it's in the roadmap and always has been; the only difference is the degree to which the roadmap has been compressed by pressure from ATI and the hype around unified.

I think it's a little different...I believe the pressure for NV to move to unified "earlier than they would have preferred" is likely due to the final form that DX10 is taking. In other words, DX10 likely pushed chip economics toward a unified solution. ATI was probably more successful in lobbying MS to make DX10 more "unified friendly" than nVidia was in keeping DX10 "unified averse."

On a related note, I'm sticking to my guns that Vista won't be available until 2007...and that's when the first G80 (assuming unified) / R600 parts will also appear. Nothing but a pure hunch on that.
 
Even in DX9, the VS3.0 and PS3.0 instruction sets were converging. I think regardless of DX10, chip economics are already pushing for a unified approach. Putting so many dedicated vertex shaders on the die wastes space (plus now we have geometry shaders), and ILP has been proven not to scale, so if you're going to put as many ALUs on the chip as the budget will allow, you need TLP, and TLP plus huge numbers of ALUs cries out for unified. I don't think the NV engineers are dumbasses who aren't familiar with all of the various architectural advantages and designs out there, any more than Intel was blind to dual core or TLP; they just tried to squeeze as much out of ILP as they possibly could instead of abandoning ship earlier.
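To put the TLP point in concrete terms, here's a toy back-of-the-envelope model (all numbers invented; this is not any real chip): a fixed vertex/pixel split finishes a frame only as fast as its most overloaded pool, while a unified pool simply divides the total work across every ALU.

Code:
# Toy model: one frame's shader work on a fixed 8 vertex / 24 pixel ALU
# split vs. a 32-ALU unified pool. All numbers are invented.

def fixed_time(vertex_work, pixel_work, v_alus=8, p_alus=24):
    # Each pool can only run its own job type, so the frame is done
    # when the more overloaded pool finishes.
    return max(vertex_work / v_alus, pixel_work / p_alus)

def unified_time(vertex_work, pixel_work, alus=32):
    # A unified pool just spreads the total work across every ALU.
    return (vertex_work + pixel_work) / alus

for v_share in (0.10, 0.25, 0.50, 0.80):  # fraction of work that is vertex
    vw, pw = 100 * v_share, 100 * (1 - v_share)
    f, u = fixed_time(vw, pw), unified_time(vw, pw)
    print(f"vertex share {v_share:.0%}: fixed {f:.2f}, unified {u:.2f}, "
          f"unified is {f / u:.2f}x faster")

In this toy model the unified pool never loses: the fixed split only matches it when the workload happens to hit the 1:3 ratio exactly, and at a vertex-heavy 50% share it's 2x slower while most of the pixel ALUs sit idle.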

I think NVidia PR operates on the "if a competitor is going to beat us to market with X, then downplay the importance of X" algorithm.
 
DemoCoder said:
Even in DX9, VS3.0 and PS3.0 instruction sets were converging.

Right...just not enough in the PC space to push vendors toward a unified hardware environment.

I don't think the NV engineers are dumbasses who aren't familiar with all of the various architectural advantages and designs out there...

?? Neither do I...I certainly was not implying that.

I think NVidia PR operates on the "if a competitor is going to beat us to market with X, then downplay the importance of X" algorithm.

Sure...as does everyone.
 
DemoCoder said:
I think NVidia PR operates on the "if a competitor is going to beat us to market with X, then downplay the importance of X" algorithm.

NVIDIA isn't the only one... I remember Intel saying "We don't see a need for 64 bits on the desktop anytime soon," then Yamhill got leaked and now Pentium 4s are x86-64. I think it is pretty common in every marketplace.
 
DemoCoder said:
I think it was probably in the roadmap. But I think NVidia PR is divorced from NVidia engineering.

Well, I can't ever recall NV's PR making any comment about it. The commentary and reaction has stemmed from Kirk, and he's not divorced from engineering.
 
Fodder said:
Are we still expecting G80 to be a multi-core (and not just duplicates) GPU?

:oops: Don't take my sig as the Conventional Wisdom! It's labeled "Wild-eyed Flyer" for a reason, y'know!
 
geo said:
:oops: Don't take my sig as the Conventional Wisdom! It's labeled "Wild-eyed Flyer" for a reason, y'know!
What does "Wild-eyed" mean?

R600
~450-500 million transistors
Unified architecture
256-bit memory interface
FP16 filtering reloaded
Vertex texturing reloaded
No new AA or AF modes

G80
Nearly the same transistor count
Not unified, 32 pipes
256-bit memory interface
Maybe better multisampling AA, but no better AF. I think the design is too far into production to make that change. (I don't think NVIDIA knew about R520's HQ-AF early enough.)

Those are my thoughts so far.
 
That's correct, Dave. When NVIDIA engineers were talking about R600 vs. G80, they were saying there's no need for a unified architecture yet. So now all of a sudden NVIDIA's G80 is a unified arch? I don't believe that for one second.
If it's true, then the G80 will arrive way after the R600, at least 6 months later. :cool:
 
mapel110 said:
What does "Wild-eyed" mean?

Wild-eyed = Well, wild eyes. Unshaven. Disheveled hair. Possibly a little drool. Think the type of guy you see on the corner in some downtowns, preaching animatedly to the air as disinterested folks walk by.
Flyer = highly speculative. Think internet day trader during the .com boom.

I may have to rethink this whole sig business. :LOL:
 
Dave Baumann said:
Well, I can't ever recall NV's PR making any comment about it. The commentary and reaction has stemmed from Kirk, and he's not divorced from engineering.

Divorced in the sense that what they say in public has no bearing on what is in their roadmap. Kirk was out there saying 256-bit buses weren't needed, yet the NV35 arrived one quarter later with a 256-bit bus. So are we to believe that Kirk had an epiphany the day after he said that, and NVidia magically rushed a 256-bit NV35 architecture into production in such a short period of time? Or that NVidia had planned on a 256-bit bus all along, and Kirk was just broadcasting NVidia PR talking points for damage control? I rather think that the NV35 was what the NV30 was originally supposed to be, but because of time constraints, delays, and problems with their manufacturing process, the NV30 hackjob became necessary.
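As a quick sanity check on why the 256-bit bus mattered, here's the rough peak-bandwidth arithmetic (the clock figures are from memory and approximate):

Code:
# Peak memory bandwidth = (bus width in bytes) * effective memory clock.
# Clock figures below are approximate / from memory.

def bandwidth_gb_s(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

print(f"NV30, 128-bit @ ~1000 MHz effective: {bandwidth_gb_s(128, 1000):.1f} GB/s")
print(f"NV35, 256-bit @ ~850 MHz effective:  {bandwidth_gb_s(256, 850):.1f} GB/s")

That's roughly 16 GB/s vs. 27 GB/s, which is not the kind of gap you hand-wave away as "not needed".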

NVidia has some patents on unified technology. That means at least one of their teams has been working on it at either the architectural design or implementation stage. Thus, I think Kirk badmouthing unified shading has little bearing on how much internal effort NVidia is placing on it in their roadmap. In fact, given the long timespan unified is taking to get to market, it would be bad strategy for PR comments to accurately reflect what you are working on.

I don't believe in a late-stage alteration of the G80 to be unified. Unified requires large architectural changes. You don't just take a traditional design like a 32-pipe G70 and stuff a unified architecture into it. At this stage, changing a non-unified G80 to be unified would be like throwing away your design and starting over. That would be pretty bad from the standpoint of trying to hit the Vista timeline, unless you are 100% sure of a Vista delay.

So, if the G80 does turn out to be unified and is delivered in 2006, then I believe one of two situations exists: either a) the G80 was always going unified (or the decision to change was made a long time ago), or b) the G80 has been scrapped, and a unified G90 core has been renamed the G80.

Scenario c), in which NVidia burned the midnight oil to refactor the G80 into a unified design two to four quarters before launch, I think is highly unlikely. Especially since they also have to deal with a process change.
 
DemoCoder said:
Scenario c), in which NVidia burned the midnight oil to refactor the G80 into a unified design two to four quarters before launch, I think is highly unlikely. Especially since they also have to deal with a process change.

That would depend on when you expect it and what process you expect it to be.
 
I expect R600 to be quite like a "doubled Xenos":
16 pipelines/ROPs, 96 shader ALUs,
32 + 32 TMUs.

Though I'm not sure about the eDRAM; there is the problem of having enough of it for big resolutions without having to draw the screen in many parts (which would require storing the vertex information, etc.).
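Rough numbers on that problem (assuming Xenos-style 4 bytes of color plus 4 bytes of Z/stencil per sample, and a purely hypothetical 20 MB of eDRAM for a "doubled Xenos"):

Code:
# Back-of-the-envelope: backbuffer footprint vs. eDRAM capacity,
# assuming 4 bytes color + 4 bytes Z/stencil per sample (Xenos-style).
# The 20 MB capacity is a made-up "doubled Xenos" guess.

def footprint_and_tiles(width, height, samples, edram_mb=20):
    nbytes = width * height * samples * (4 + 4)
    tiles = -(-nbytes // (edram_mb * 1024 * 1024))  # ceiling division
    return nbytes / 2**20, tiles

for w, h in ((1280, 720), (1600, 1200), (1920, 1200)):
    mb, tiles = footprint_and_tiles(w, h, samples=4)
    print(f"{w}x{h} 4xAA: {mb:.0f} MB -> {tiles} tile(s) in 20 MB eDRAM")

So even doubled eDRAM would mean rendering in 2-4 tiles at PC resolutions with 4xAA, which is exactly where the stored-vertex-information problem comes in.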
 
Jawed said:
Ah yes, the infamous NV50 (G80) has been cancelled rumour.

Jawed


Well, I said this before, but I guess I can do it again.

Nvidia is against a unified architecture, citing that they won't use it until it shows a benefit, and will probably continue to operate on highly programmable discrete pipelines instead. This can benefit both cost and manufacturing difficulty, as well as (perhaps) overall speed. I do not think they will launch their DX10-compatible products without a unified part, however. I expect them to launch medium and/or low-end parts based on a unified architecture, both to get a feel for production and to aid driver maturation for a flagship unified part, which may or may not come sometime in 2007, so that they aren't releasing products that prove to be immature due to drivers. Let's face it, a retail product release helps both companies mature drivers far more than in-house driver development. People can say company X has had so much time to do drivers that when it launches they will already be top notch, but I have never seen that be the case. The best performance drivers seem to come 3-6 months after a launch, and driver performance improvements, both insignificant and significant, continue throughout the product's cycle.

The NV50 core has been in development for Vista and DX10 for quite a long while, almost two years by my judgment. Both companies have had access to the ever-changing DX10 API for well over a year. I see the "G" code-named cores as the stopgap between the NV40 and NV50. Think of it as: hey, we've had this core on the burner for a while, but Microsoft keeps changing things as well as pushing release dates, so we need to do something about our product line in between or we'll be infringing on codenames. Obviously Nvidia won't change their timetable or core succession, so enter the G70 and the departure, for the time being, of the NV codename. If there is in fact a G80, I very much suspect it will be launched early or mid next year, and most certainly prior to their first DX10 part. And as soon as that DX10 part is introduced, I think we'll see Nvidia go back to NV codenames. This is literally the best and most logical reason I can come up with for their departure from the codenames they have been using for the last 5+ years.

They will keep riding this tech, the NV40 derivative, modifying it throughout and keeping essentially the same SM3.0 technologies, until Vista launches (now late 2006/early 2007). Once that happens, we should see a very mature and substantially impressive/complex core from them, technology-wise, as I think they have been working on it (NV50) for quite a long time.

R600, I'm sure, will be its own wonder, although even on its launch timetable I don't think it will use embedded DRAM. It costs too much and will cause problems in games; I believe it takes specific coding to use it.

That's my theory in a nutshell.
 
Sounds to me like you've got it pretty good. The eDRAM, though, that's the one I have been thinking about for the R600. Will they or won't they? :rolleyes:
 
geo said:
:oops: Don't take my sig as the Conventional Wisdom! It's labeled "Wild-eyed Flyer" for a reason, y'know!
I'm referring to an old rumour; there was talk a while back of pixel and vertex ops being separated onto different cores, though I guess unified shaders neuter that one. :)
DemoCoder said:
I don't believe in a late-stage alteration of the G80 to be unified.
Perhaps, like NV40, there are two G80s. :LOL:
SugarCoat said:
This is literally the best and most logical reason I can come up with for their departure from the codenames they have been using for the last 5+ years.
I figure the dragging out of NV4x simply gave them an appropriate opportunity to re-jig their naming scheme. G for graphics, C for chipsets ... what are the codenumbers for their handheld parts?
 
Wasn't Kirk talking about now? As if a unified approach isn't optimal right now? When did he say this, anyway? A year ago?

I think it's possible that he meant the DX9 SM3 generation.
 
DegustatoR said:
Wasn't Kirk talking about now? As if a unified approach isn't optimal right now? When did he say this, anyway? A year ago?

I think it's possible that he meant the DX9 SM3 generation.

David Kirk said:
"We will do a unified architecture in hardware when it makes sense. When it's possible to make the hardware work faster unified, then of course we will. It will be easier to build in the future, but for the meantime, there's plenty of mileage left in this architecture."

Taken from bit-tech in July of this year.
 
Well, I think it's pretty clear he was talking about DX9 SM3 and NV4x/G7x specifically. And he's even saying that "It (unified architecture) will be easier to build in the future".
 