The Official G84/G86 Rumours & Speculation Thread

No problem Geo :p

How concise of you! :LOL:

Well, pardon my French, but ATI has had nothing but problems with it (***edit: and yes, I mean R600), and I haven't heard the most glowing impressions from NV's side either for G84/6. It seems rather obvious what is at fault here.
 
Yes indeed. It is ATI who screwed up R600 on 80nm, not that there's anything wrong with the 80nm process. These leaked benches seem to suggest a very competitive product in the G84/86, and an 80nm G81 should be even better.
 
You know for a fact that there are no process related issues? How?

OK, I don't know it for a fact, but it seems clear that the G84/86 are totally fine while the R600 is not. I believe it is ATI's fault that R600 is bad, and not the process.

Speaking of which, why would the process have any serious problems to begin with? 80nm is just an optical shrink of 90nm, which means it should work nearly the same.
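(Back-of-the-envelope, and purely as an idealised sketch assuming a perfect linear shrink with nothing else retuned:

$$\left(\frac{80\,\text{nm}}{90\,\text{nm}}\right)^2 \approx 0.79$$

i.e. roughly 21% less die area for the same layout, while the transistors themselves are expected to behave much the same, which is where that "should work nearly the same" expectation comes from.)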
 
Pretty sure there are three types of 80nm processes; the direct shrink wouldn't be too much work, but the HS or LP variants might incur some changes.
 
"Should work nearly the same." is engineering speak for it's going to mess everything up because you thought it wouldn't. :cool:

In addition, I doubt the G84/86 are taxing the process nearly as much as the R600. Does this make the 80nm process bad? Not necessarily, but it may be a bad fit for the R600. So it may be a combination of things.
 
"Should work nearly the same." is engineering speak for it's going to mess everything up because you thought it wouldn't. :cool:

In addition, I doubt the G84/86 are taxing the process nearly as much as the R600. Does this make the 80nm process bad? Not necessarily, but it may be bad with the R600. So it may be a combination of things.

See, we heard that before, back in the NV30 dark ages, so allow us to be skeptical... :D
It too had a deadly combination of a new fabrication process (130nm Low-K), a new API (DirectX 9), a new memory technology (DDR2) and huge cooling requirements (for its time, of course).
And what happened?
It arrived later than the competing solution, was slower than expected, ran hot (not as in "wanted" hot, but as in "ouch!" hot) and was hard to find after the official launch, eventually living a very short shelf life once the NV35/37/38 series debuted.

Even the R300 didn't risk as much, sticking with the proven 150nm node and standard DDR1 coupled with a wider bus (256-bit).
The fact that it stuck to DX9-spec (FP24, emphasis on shader processing speed and image quality) didn't hurt either.
 
Oh yes, the NV40 and R420 later came out on 130nm (may have had low-k, don't remember) and they were fine. NV35 wasn't too bad either, albeit not very exciting. That was proof that it was the fault of the NV30 and nothing else. I suspect the same thing with the R600. Somewhere, ATI screwed up, because even though 80nm may have its problems, it isn't exactly bleeding-edge stuff anymore.
 
Yeah, NV40 and R420 came out on the 130nm node... without Low-K and almost 2 years later. ;)
 
Not really. NV40 and R420 turning out fine isn't proof that the fault lay with the NV30 and nothing else; the existence of good chips that came out on time in high volumes on the same node may just indicate that they figured out how to work with the process properly.

No, what really indicates there was something fundamentally wrong with the design of the NV30 architecture was its abysmal efficiency running SM2 shader programs.
 
Actually, if I remember correctly, the NV30 was on 130nm without Low-K. Next generation, the R420 and R480 were on 130nm with Low-K, whereas the R430 was on 110nm without Low-K. I think the first high-end Nvidia card with Low-K was the 7900GTX, which was 90nm Low-K. The 7800GTX was on 110nm, which I believe did not have Low-K. I forget what the 6800U was, but I think it was 130nm without Low-K, though some 6800s were made later on 110nm. This is all from memory, which may be faulty.
 
You're right about all of that (though I forget if low-k was optional or standard with 90nm), but I remember NV blaming some of NV30's delay on TSMC's low-k problems at 130nm, so it's possible they initially tried to go low-k with their 130nm GPUs.
 
Geez I sure hope it's not running at those clocks in 2D for real. But all signs point to G84 being ready to go.
 