So... what will G80/R600 be like?

Junkstyle said:
Nvidia has to shrink the die. ATI already took the hit and it was nearly a total disaster with the R520. This can't be an easy thing, so I'm guessing Nvidia could have similar problems.

ATI didn't have a problem with the die shrink; they had a problem with the library they were using to design the chip.
 
SugarCoat said:
Well, I said this before, but I guess I can do it again.

Nvidia is against a unified architecture, saying they won't use it until it shows a benefit, and will probably continue to build highly programmable discrete pipelines instead. This can benefit them in cost and difficulty of production as well as (perhaps) overall speed. I don't think they will launch their DX10-compatible products without a unified part, however. I very much expect them to launch medium- and/or low-end parts based on a unified architecture, both to get a feel for production and to aid in driver maturation for a flagship unified part which may or may not come sometime in 2007, so that they aren't releasing products that prove to be immature due to drivers. Let's face it: a retail product release helps both companies mature drivers far more than in-house driver development. People can say company X has had so much time to do drivers that when it launches they will already be top notch, but I have never seen that be the case. The best performance drivers seem to come 3-6 months after a launch, and driver performance improvements, both insignificant and significant, continue through the product's cycle.

The NV50 core has been in development for Vista and DX10 for quite a long while, almost two years by my judgment. Both companies have had access to the ever-changing DX10 API for well over a year. I see the "G" code-named cores as the stopgap between the NV40 and NV50. Think of it as: hey, look, we've had this core on the burner for a while, but Microsoft keeps changing things as well as pushing release dates; we need to do something about our product line in between, or we'll be infringing on codenames. Obviously Nvidia won't change their timetable or core succession, so enter the G70 and the departure, for the time being, of the NV codename. If there is in fact a G80, I very much suspect it will be launched early or mid next year, and most certainly prior to their first DX10 part. And as soon as that DX10 part is introduced, I think we'll see Nvidia go back to NV codenames. This is literally the best and most logical reason I can come up with for their departure from the code names they have been using for the last 5+ years.

They will keep riding this tech, the NV40 derivative, modifying it throughout and keeping essentially the same SM3.0 technologies, until Vista launches (now late 2006/early 2007). Once that happens, we should see a very mature and substantially impressive/complex core, technology-wise, from them, as I think they have been working on it (the NV50) for quite a long time.

R600, I'm sure, will be its own wonder, although even in its launch timetable I don't think it will use embedded DRAM. It costs too much and will cause problems in games; I believe it takes specific coding to use it.

That's my theory in a nutshell.


So you're thinking the G80 is not the NV50, as many have thought, but rather that G80 is another NV4x architecture like G70. I've often thought that might be the case.
 
Megadrive1988 said:
So you're thinking the G80 is not the NV50, as many have thought, but rather that G80 is another NV4x architecture like G70. I've often thought that might be the case.
That's possible. It might be somewhat better to talk about the NV50 vs. R600, then. A possible 90nm version of the G70 may be called the G80, as opposed to G75.

I fully expect to see the NV50 (or its equivalent) to be released late next year, at about the same time as Vista. ATI will likely do something similar.

But from nVidia's statements, it currently seems very unlikely that the NV50 will be a unified architecture. I do, however, have high hopes that it will have good branching performance, MSAA on FP16 framebuffers, higher-quality anisotropic filtering, and, of course, full SM4 compliance.
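
As a side note, the FP16-MSAA wish is at least directly testable per card: a minimal D3D9 probe (my own sketch, error handling kept to a minimum) would ask whether 4x multisampling is supported on an FP16 surface.

```cpp
// Query whether 4x MSAA works on an A16B16G16R16F (FP16) render target.
// Period-appropriate D3D9 sketch; link against d3d9.lib.
#include <cstdio>
#include <d3d9.h>

int main() {
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) { std::printf("no D3D9\n"); return 1; }

    HRESULT hr = d3d->CheckDeviceMultiSampleType(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
        D3DFMT_A16B16G16R16F,     // FP16 render target format
        TRUE,                     // windowed
        D3DMULTISAMPLE_4_SAMPLES, // 4x MSAA
        nullptr);                 // quality levels not needed here

    std::printf("4x MSAA on FP16: %s\n", SUCCEEDED(hr) ? "yes" : "no");
    d3d->Release();
    return 0;
}
```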
 
Dave Baumann said:
I think this is the one pertinent to future architectures.
Bingo! That's what I remember as being a year old :)

geo said:
This doesn't pass the giggle test for me, though. Context is all, and that context would be "duh!": everybody and his brother knows you are talking about Vista and DX10 in that conversation.

I'd find intentional misdirection an easier answer to swallow than "oh, you didn't mean SM3?".
Well....

BitTech said:
Unified Shader architectures

...skip...

"Debating unified against separate shader architecture is not really the important question. The strategy is simply to make the vertex and pixel pipelines go fast. The tactic is how you build an architecture to execute that strategy. We're just trying to work out what is the most efficient way.

"It's far harder to design a unified processor - it has to do, by design, twice as much. Another word for 'unified' is 'shared', and another word for 'shared' is 'competing'. It's a challenge to create a chip that does load balancing and performance prediction. It's extremely important, especially in a console architecture, for the performance to be predicable. With all that balancing, it's difficult to make the performance predictable. I've even heard that some developers dislike the unified pipe, and will be handling vertex pipeline calculations on the Xbox 360's triple-core CPU."

"Right now, I think the 7800 is doing pretty well for a discrete architecture?

So what about the future?

"We will do a unified architecture in hardware when it makes sense. When it's possible to make the hardware work faster unified, then of course we will. It will be easier to build in the future, but for the meantime, there's plenty of mileage left in this architecture."

To me it looks like he was talking about present implementations of USA :) versus their present architecture. He's clearly saying that the "7800 is doing pretty well", then continues with "for the meantime, there's plenty of mileage left in this architecture" (RSX, G70-512M, G7x, 90nm, etc.; there's plenty of future left in G7x even now), and he says "We will do a unified architecture in hardware when it makes sense".

When will it make sense? When Vista comes out, and when ATI has USA parts on the PC scene. So IMO G8x is very possibly a USA right from the beginning of development.
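
As an aside on Kirk's "shared means competing" remark, here is a minimal C++ sketch (invented policy and names, not any vendor's actual scheduler) of a unified pool balancing vertex and pixel queues. The throughput either stage sees depends on what the other stage is doing, which is exactly the predictability problem he describes.

```cpp
// Toy unified shader pool: one set of units serviced from two work queues.
#include <cstdio>
#include <queue>

enum class WorkType { Vertex, Pixel };

struct UnifiedPool {
    std::queue<WorkType> vertexQ, pixelQ;

    // Naive balancing policy: always service the deeper queue.
    // A real chip would also weight by latency, batch size, etc.
    bool issueOne(int& vDone, int& pDone) {
        std::queue<WorkType>* q = nullptr;
        if (vertexQ.size() >= pixelQ.size() && !vertexQ.empty()) q = &vertexQ;
        else if (!pixelQ.empty())                                q = &pixelQ;
        if (!q) return false;
        (q == &vertexQ ? vDone : pDone)++;
        q->pop();
        return true;
    }
};

int main() {
    UnifiedPool pool;
    for (int i = 0; i < 100; ++i) pool.vertexQ.push(WorkType::Vertex);
    for (int i = 0; i < 900; ++i) pool.pixelQ.push(WorkType::Pixel);

    int vDone = 0, pDone = 0;
    while (pool.issueOne(vDone, pDone)) {}
    // With a 1:9 workload mix, the effective "vertex rate" is a small
    // fraction of the pool; change the mix and the same shader's
    // throughput changes with it.
    std::printf("vertex issued: %d, pixel issued: %d\n", vDone, pDone);
    return 0;
}
```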
 
wireframe said:
I hope this will include the differentiation between a unified shader API (software) and a unified shader architecture (hardware). I have been itching to have this clarified to me.

Really? What's unclear about "the differentiation" between them in your mind?

Edit: Though I'm interested to hear Rev have a go at it, and such other technical points as he'd like to throw down on in this area.
 
DegustatoR said:
When will it make sense? When Vista comes out, and when ATI has USA parts on the PC scene. So IMO G8x is very possibly a USA right from the beginning of development.
Not necessarily. As David Kirk noted, it is an implementation detail. It's performance in real games that is going to be important.

Part of me really hopes that nVidia will go for a unified architecture, just because the idea of a unified architecture is so simple and beautiful. Part of me doesn't, just so that we can see a good showdown between a more traditional architecture and a unified one.
 
nAo said:
According to the PS3 public roadmap, at this point NVIDIA has already produced a high-end GPU using a 90 nm process (from a fab they have never used before).
Is that something other than RSX, and how will that translate to their usual PC-space fabs?
 
Pete said:
Is that something other than RSX, and how will that translate to their usual PC-space fabs?

Well, "PS3 roadmap" might suggest the answer to the first. . . Or are you being as subtle as a Yendi on that one? (blatant digi shoutout).
 
Heh, OK, assuming it's RSX: I thought different fabs have different libraries or something, so I'm honestly curious how NV producing RSX at 90nm translates into future parts at different fabs. At this point, ATI seems to be a bit ahead of the 90nm game.

NV has had the option to keep their 90nm work quiet, considering how well the GF6 and now GF7 are doing, so they may surprise me by jumping in without missing a beat.
 
I personally think G80 will be very similar to G70 at the hardware level, but will have things that make it appear and function similar to a USA chip in Vista/whatever software is being used. Is that even possible?

I think R600 is fully USA, though. It'll be a Xenos and R580 hybrid (mainly the ring bus memory controller, from my point of view), with lots of tweaks, fully ready for whatever feature set is the main one at the time and possibly a little bit in the future.
 
SM4 still has vertex and pixel shaders. The differences are that they're more capable and that the instruction sets are the same. This lends itself to a unified architecture, but doesn't necessitate one.
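
A toy illustration of that point (a made-up mini-ISA, nothing like real SM4 bytecode): once both stages speak the same instruction set, a single execution path can serve either kind of work, yet a driver could just as well map the same ISA onto two dedicated units.

```cpp
// Toy "unified ISA": both shader stages compile to the same opcodes.
#include <cstdio>
#include <vector>

enum class Op { Add, Mul };
struct Instr { Op op; float a, b; };

// One execution path serves both stages; the hardware doesn't need to know
// whether the program shades a vertex or a pixel.
float run(const std::vector<Instr>& program) {
    float acc = 0.0f;
    for (const Instr& i : program) {
        switch (i.op) {
            case Op::Add: acc += i.a + i.b; break;
            case Op::Mul: acc += i.a * i.b; break;
        }
    }
    return acc;
}

int main() {
    std::vector<Instr> vertexProg = {{Op::Mul, 2.0f, 4.0f}}; // "vertex" work
    std::vector<Instr> pixelProg  = {{Op::Add, 1.0f, 0.5f}}; // "pixel" work
    std::printf("vs=%.1f ps=%.1f\n", run(vertexProg), run(pixelProg));
    return 0;
}
```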
 
Reverend said:
As for the original topic, these two products will be what Vista wants. Know that (with finality) and we have the answer :) . I will start another thread regarding an interesting related side topic, that of unified shaders (I have a lot to say about this!).

Well? Inquiring minds want to know!
 
Dave Baumann said:
Well, I can't ever recall NV's PR making any comment about it. The commentary and reaction has stemmed from Kirk, and he's not divorced from engineering.

I have severe doubts that Mr. Kirk has written or looked at a single line of RTL within the last 3-4 years. He's in management, regardless of what his title or PR persona is.

The rules are pretty simple: if you have more than 1,000 people and you are talking to the public, then you are PR.

Aaron Spink
speaking for myself inc.
 
Chalnoth said:
That's possible. It might be somewhat better to talk about the NV50 vs. R600, then. A possible 90nm version of the G70 may be called the G80, as opposed to G75.

I fully expect to see the NV50 (or its equivalent) to be released late next year, at about the same time as Vista. ATI will likely do something similar.

But from nVidia's statements, it currently seems very unlikely that the NV50 will be a unified architecture. I do, however, have high hopes that it will have good branching performance, MSAA on FP16 framebuffers, higher-quality anisotropic filtering, and, of course, full SM4 compliance.

Considering the age of the actual G70 chip, I tend to believe that their next flagship product with significant improvements will be the G80. Anything relating to the G7x series should be only minorly modified or moved to a new process, as well as the mid/low-range series. However, combining my theory with the popular theory creates a strange launch schedule, unlike anything we've seen before, if Vista launches on time: three high-end products between January '06 and January '07.

G7x die shrink with speed increases, which I don't think will ever exist. They'll shrink the die if they feel they have enough time to benefit cost-wise from switching production to 90nm, although I must point to the harsh fact of how fast they shut down NV40 production due to cost and the G70 supersession. So I don't expect to see this core on 90nm unless it launches before Christmas. But architecture differences as well as 30-40% higher clocks on top of that? I think that's pushing the realistic spectrum. Don't forget both companies are more than willing to delay a launch to stock product for a hard launch in quantity. That means both the R580 and the 90nm G7x (assuming it's 90nm, has had its architecture changed significantly, and has much higher clocks) have to be in full swing now or very soon if they're going to launch as early as people keep saying (early next year?), something which I just don't see. I think we'll know a lot more about their plans in February.

G80 with significant improvements to the architecture, on 90nm: what everyone expects from the G7x, still remaining SM3.0. I'd expect this to be the real R580 competitor for spring/early summer. I'd also expect significant advanced shader enhancements from Nvidia here, although perhaps not quite up to par with the R580.

NV50: speculate what you will; perhaps 80nm, a real monster, fully DX10-compatible. Winter launch timeframe, just before the holidays, if MS can stick to a schedule. I'd expect many improvements, like free AA/AF, from both ATI and Nvidia by the time the NV50 and R600 are launched.

Skrying said:
I personally think G80 will be very similar to G70 at the hardware level, but will have things that make it appear and function similar to a USA chip in Vista/whatever software is being used. Is that even possible?

I think R600 is fully USA, though. It'll be a Xenos and R580 hybrid (mainly the ring bus memory controller, from my point of view), with lots of tweaks, fully ready for whatever feature set is the main one at the time and possibly a little bit in the future.


The new memory controller that we see on the R520 is ahead of its time. I think it's fair to say that the memory controller was designed exactly for cores like the R600 and simply (well, not so simply) implemented in the R520 as well. This memory controller is made to last well into GDDR4, and an ATI employee recently stated that we'd see some serious performance advantages from it once they can mass-produce cards with GDDR4, which I believe it was really designed for.
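
For what it's worth, here's a rough sketch of the ring idea (invented stop count and layout, not ATI's actual design): clients and memory channels sit at stops around a ring, and a request simply travels the shorter way around to its target. The appeal is wiring simplicity and clock scaling rather than minimum latency.

```cpp
// Toy ring-bus hop count: requests may travel either direction around
// the ring, so they take whichever way is shorter.
#include <algorithm>
#include <cstdio>

constexpr int kStops = 8; // ring stops (clients + memory channels), made up

int ringHops(int from, int to) {
    int cw = (to - from + kStops) % kStops; // clockwise distance
    return std::min(cw, kStops - cw);       // take the shorter direction
}

int main() {
    // e.g. a texture unit at stop 1 reading from a channel at stop 6
    std::printf("hops: %d\n", ringHops(1, 6)); // 3, going counter-clockwise
    return 0;
}
```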
 
SugarCoat said:
Considering the age of the actual G70 chip, I tend to believe that their next flagship product with significant improvements will be the G80. Anything relating to the G7x series should be only minorly modified or moved to a new process, as well as the mid/low-range series.
The whole reason for the post you are replying to is that a part named "G80" may actually be G70-based. With a new naming scheme comes uncertainty in how future products are named.
 
My guess is that if NV is keeping to reusing codenames, like not calling the 7800GTX-512 a G75 and the like, they'd have enough numbers left before 80 to really keep the G80 label for their first WGF2.0/USA GPU. I don't dare speculate whether these two possibilities will coincide, or whether their first WGF2.0 GPU is a different chip from their first USA GPU.

I guess they'll start by trying to decouple the TMU, and eventually the texture address generator, from the ALU. At least that would make the most sense considering their already vastly superior math power and their vastly inferior efficiency at using that advantage in existing games, which still rely heavily on texture fetches and high-quality texture filtering. A rough model of why that helps follows below.
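
Here's a back-of-the-envelope model of the decoupling payoff (all numbers invented, and the "decoupled" case is an idealized upper bound, since real TMUs also pipeline fetches): the point is only that overlapping fetch latency with independent math keeps the ALUs busy.

```cpp
// Toy cycle count: coupled vs. decoupled texture fetches.
#include <algorithm>
#include <cstdio>

int main() {
    const int fetches      = 100; // texture fetches in a shader
    const int mathPerFetch = 4;   // independent ALU ops available per fetch
    const int fetchLatency = 20;  // cycles per fetch (made up)

    // Coupled: the ALU sits idle for the full latency of every fetch.
    int coupled = fetches * (fetchLatency + mathPerFetch);

    // Decoupled (idealized): math issues while fetches are in flight, so
    // runtime is bounded by whichever resource is busier.
    int decoupled = std::max(fetches * fetchLatency, fetches * mathPerFetch);

    std::printf("coupled: %d cycles, decoupled: %d cycles\n",
                coupled, decoupled);
    return 0;
}
```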
 
The label "G70" was clearly chosen to coincide with the GeForce7 series. If nVidia refreshes at least some of its lineup in the Spring on 90nm, what's to prevent them from calling these parts the GeForce8 series? That would likely coincide with some G80 name (and G8x derivatives). There may, after all, be a performance boost to warrant such a name change.
 