G70 here we come

DaveBaumann said:
The IHVs make different choices at different times and they also have varying successes with those choices - it's not as though R350 was 130nm, nor NV40 130nm low-k.

R350 had a 1/3 transistor advantage because of FP24, and I gather a few other transistor advantages here and there as well. ["advantage" in the sense of size/yield/clock]

NV40 is not low-k, 'tis true, but it isn't not low-k *and* a bigger process.

Then again, if your bus hints pan out, perhaps we won't be able to call this next gen "similarly featured" either. If so, to whose performance advantage it works remains to be seen.
 
http://www.theinquirer.net/?article=21827
Nvidia core G80 emerges

CeBIT 2005 After G70 comes G80

By Fuad Abazovic in Hannover: Tuesday 15 March 2005, 08:00
NVIDIA'S NEXT generation graphics chip remains a well-protected secret. We still managed to get some information about it, despite that.

We confirmed that G70 is the real thing and we learned that Nvidia has one more chip down the road codenamed the G80. We expect it to be based on a 90 nanometre process and have more pipes than NV40. We don't think that it will be dual graphics core stuff but you never know, you know.

We suspect this card should be ready sometime in September, even though some people suggested April as a possibility. But we believe that April is just a poisson d'Avril, or mayhaps a red herring.

Nvidia and its partners are quite confident that they have the right thing, but ATI has a strong horse in the R520 for this race. Let's see what the future brings. µ

http://www.theinquirer.net/?article=21822
Nvidia investigates dual core graphics

CeBIT 2005 IBM Fishkill involved

By Fuad Abazovic in Hannover: Tuesday 15 March 2005, 08:23
WE LEARNED that Nvidia is looking into making a dual die for a graphics chip. I believe that this is the next "big thing" and the graphics industry will move towards it as soon as it becomes too expensive to make single-core chips with many transistors inside. It happened to the CPU industry and it will happen for graphics chips too.

For some time, Nvidia has toyed with the idea of putting two dies on one chip. I am not sure how soon it will happen, or whether Nvidia's future G70 or G80 actually has anything to do with it, but it will surely happen over the next few quarters.

It's interesting to note that Nvidia is doing this at IBM Fishkill, the factory that is also working on dual cores for AMD.

When we talked with ATI's CEO Dave Orton, he confirmed to us that the industry will have to move that way once it starts to make financial sense. I believe that's why Nvidia is preparing, and I am sure that ATI is not sitting and waiting with its legs crossed like a wallflower at a dance.

The eternal graphics fight won't calm down anytime soon. It's all about winning designs and selling more and more, since that's what investors expect you to do. µ

wild guess:

Nvidia G70 = NV4x with 20-24 pipes. ( NV47 ? )

Nvidia G80 = NV4x with 24-32 pipes. ( NV48 ? )



8)


maybe at least one of them (G70, G80) is dual core if not both.


*imagines dual-core GPUs used in 2 cards in SLI*

:oops:
 
Where does NV5x fit in? G90? G80? I ask, because when talking about the PS3 GPU, Nvidia mentioned that they may have something themselves to show (i.e. a PC card) from the same tech later this year (which might line up nicely with the Sept date mentioned by the Inquirer). And it's supposedly from a generation beyond NV4x..
 
R350 had a 1/3 transistor advantage because of FP24, and I gather a few other transistor advantages here and there as well. ["advantage" in the sense of size/yield/clock]

If you look at time to market (which is important when talking about processes) and overall sizes, R350 (quoted 107M Transistors) was going against, initially, NV30 (quoted 125M transistors).
 
DaveBaumann said:
R350 had a 1/3 transistor advantage because of FP24, and I gather a few other transistor advantages here and there as well. ["advantage" in the sense of size/yield/clock]

If you look at time to market (which is important when talking about processes) and overall sizes, R350 (quoted 107M Transistors) was going against, initially, NV30 (quoted 125M transistors).

Okay, I lounge corrected. However, this would be why I included "similarly featured" in my original. The transistor budgets were spent very differently with significant (big understatement) performance implications.

My impression from reading B3D reviews is that architecturally NV40 and R420 are much more similar to each other than R350 vs NV30 (tho I might quibble a bit and insist on NV35, but will readily admit that still won't get me to the 1/3 I stated above --I checked your table :) ). With ATI set to go FP32 --from what I can see today-- the most significant difference left on the table is about to disappear. . .and we're still left with non-low-k 110nm vs low-k 90nm, and the question of how the latter doesn't hand a royal ass-kicking to the former. Maybe it does and NV just grits their teeth and bears it until Sept/Oct.

Tho it would be interesting to see if ATI did something radical like your bus hints, or something else, and as a result were "only" performance competitive (maybe moderate edge) with many more transistors. The entertainment value of seeing who suddenly flip-flopped on the age old "performance is all" vs "technology leadership is king" debate --and who sticks to their guns-- would be worth the price of admission.
 
Megadrive1988 said:
wild guess:
Nvidia G70 = NV4x with 20-24 pipes. ( NV47 ? )
Nvidia G80 = NV4x with 24-32 pipes. ( NV48 ? )
I think it's more like this:

NV48 = NV40 @ 110nm
G70 = NV47 = NV40 with 20-24 pipelines @ 110nm
G80 = G70 with 24-32 pipelines @ 90nm

NV is a little behind ATI on 90nm adoption. But that doesn't necessarily mean that R520 will be better than G70 (R300 vs NV30 anyone?). And I think that G70 will definitely be first to market, with much wider availability in retail. By the time R520 becomes truly available (August, I think), G80 will be released (though I don't expect it to be really available in retail till the holiday season).

8)
maybe at least one of them (G70, G80) is dual core if not both.
*imagines dual-core GPUs used in 2 cards in SLI*
:oops:
This dual-core GPU talk is somewhat strange. All of today's GPUs are dual/quad-core already!
 
So, the Inquirer learned that nVidia is going for dual-die, eh? This, more than anything, should convince you that the Inquirer is just full of shit. What they've posted isn't worth even a moment's notice.
 
Well, much like geo is wondering, I'm having trouble conceiving of the refinements NVDA can make to NV40 on TSMC's vanilla 0.13 (or 0.11u) process to allow for both more pipes and higher clocks without creating a volcanic GPU. More power to them if they get a reasonable die-size/yield on such a beast, but that hasn't necessarily been their forte in the past.
 
kemosabe said:
Well, much like geo is wondering, I have trouble conceiving of the refinements NVDA can make to NV40 on TSMC's vanilla 0.13 (or 0.11u) process to allow for both more pipes and higher clocks without creating a volcanic GPU. More power to them if they get a reasonable die-size/yield on such a beast, but that hasn't necessarily been their forte in the past.

Dave has hinted here and there that maybe lots o' more pipes isn't where ATI is going. . .but it certainly seems they spent a gehenna lot of transistors *somewhere* on R520, and FP32 isn't enuf to explain it. If they didn't spend them on lots more pipes, that might explain how in at least some circumstances a 110nm part could be competitive (or possibly even ahead. . .if you care about 145fps instead of 120fps, and some folks seem to, much to my puzzlement). But if this is the case, where oh where did the transistors go, and what did it buy us? Significantly better performance in other circumstances more commonly used or at least more forward-looking? Significant IQ improvements? If this is the case, it better be something tangible and easy to communicate/demonstrate/understand. . .
 
geo said:
Dave has hinted here and there that maybe lots o' more pipes isn't where ATI is going. . .but it certainly seems they spent a gehenna lot of transistors *somewhere* on R520, and FP32 isn't enuf to explain it. If they didn't spend them on lots more pipes, that might explain how in at least some circumstances a 110nm part could be competitive (or possibly even ahead. . .if you care about 145fps instead of 120fps, and some folks seem to, much to my puzzlement). But if this is the case, where oh where did the transistors go, and what did it buy us? Significantly better performance in other circumstances more commonly used or at least more forward-looking? Significant IQ improvements? If this is the case, it better be something tangible and easy to communicate/demonstrate/understand. . .

The reported number of transistors in R520 is approximately 150% of that found in NV40: 220M vs 330M, NV40 vs R520 respectively. Why expect anything but the most obvious, that it has ~150% of the number of pipelines? Doesn't 24 pipelines sound reasonable, and something that would make sense looking at the historical progression? I think so.

Of course I could be totally wrong and ATI spent a huge transistor budget on something we know nothing about, but where is the logic in thinking that would be the case?
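
For what it's worth, here's the back-of-the-envelope arithmetic behind that guess, written out as a quick Python sketch. It simply assumes pipeline count scales linearly with the transistor budget, which obviously ignores fixed-cost blocks (memory controller, vertex engines, video logic and so on), so treat it as illustration only:

    # Pipeline estimate from the transistor counts quoted above (illustrative only;
    # assumes pipes scale linearly with transistors, which they won't exactly)
    nv40_transistors = 220e6   # figure quoted above for NV40
    r520_transistors = 330e6   # rumoured figure quoted above for R520
    nv40_pipes = 16

    ratio = r520_transistors / nv40_transistors       # = 1.5
    estimated_r520_pipes = round(nv40_pipes * ratio)  # = 24
    print(ratio, estimated_r520_pipes)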
 
DegustatoR said:
This dual-core GPU talk is somewhat strange. All of today's GPUs are dual/quad-core already!

No. A pipeline is not a core. Just because you can equate two things functionally on some level of abstraction does not make them the same thing.

Dual core does make sense, with some qualifications. I would venture a guess that we are really talking about dual GPUs on one package (a la Intel Presler, I believe it is). That is to say, the two cores are not physically on the same piece of silicon.

This makes sense from a production perspective. You cannot expect a pure performance advantage by doing this. The only thing you can hope for is to not lose too much performance by running two GPUs in parallel. However, from the production side you have halved the number of transistors in your design and you can pair the dies any way you like. This is in stark contrast to dual core on a single piece of silicon, where adjacent cores must both pass QC to give you a single C/GPU to sell. So, whereas you don't gain performance, and probably lose quite a bit, by separating the cores, you allow forward progression without filling your waste bins more than your speed bins.

Look at how it is progressing. First NV40 at 220M transistors was amazing (the most transistors in a consumer chip). Now we are expecting a 300M+ transistor R520 from ATI. Building 500M+ transistor chips without defects is going to push these companies hard. Also consider that the great majority of transistors in a GPU are logic rather than memory. In a CPU you can easily cover for failing cache, but imagine having to build a 36-pipeline GPU just to ensure you have good yields for 32 functional ones.
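
To put some purely made-up numbers on that yield argument, here's a toy Poisson-style yield sketch in Python. The defect density and die areas are assumptions for illustration only, not real foundry figures:

    import math

    # Toy yield model: yield ~ exp(-D * A), D = defects per cm^2, A = die area in cm^2
    D = 0.5                 # assumed defect density (illustrative)
    A_big = 3.0             # one big monolithic die (hypothetical area, cm^2)
    A_half = A_big / 2      # each die of a two-die package

    yield_big  = math.exp(-D * A_big)    # ~0.22 -> fraction of big dies that are good
    yield_half = math.exp(-D * A_half)   # ~0.47 -> fraction of half-size dies that are good

    # Because the two halves are tested separately and any two good halves can be
    # paired on a package, the fraction of silicon that ends up sellable is
    # yield_half rather than yield_big -- the "waste bins vs speed bins" point above.
    print(yield_big, yield_half)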

Almost forgot the amusing name for this...should Nvidia insist...it's a common breakfast...muSLI. Or maybe just microSLI. SLI on one GPU package.
 
wireframe said:
No. A pipeline is not a core. Just because you can equate two things functionally on some level of abstraction does not make them the same thing.
Right. But a pixel quad is.

The only reason to go "multicore" (although I don't know what "multicore" even means for GPUs) with GPUs is if one core begins to run too hot and cannot be cooled properly. Anyway, this "multicore" will essentially be some form of multichip rendering. And if the integration becomes more complex, then this "multicore" GPU suddenly becomes today's GPU as we know it -- with several pixel quads in flight at one tick.

Dual core does make sense, with some qualifications. I would venture a guess that we are really talking about dual GPUs on one package (a la Intel Presler, I believe it is). That is to say, the two cores are not physically on the same piece of silicon.

This makes sense from a production perspective.
No sense at all. We can disable pixel quads and vertex pipelines on today's GPUs. What kind of production sense are you talking about?

Look at how it is progressing. First NV40 at 220M transistors was amazing (the most transistors in a consumer chip). Now we are expecting a 300M+ transistor R520 from ATI. Building 500M+ transistor chips without defects is going to push these companies hard.
Transistor count is irrelevant. It only means something on a given process. If someday further scaling becomes impossible (because of leakage and related heat issues), then _multi-GPU_ cards might become an option for further performance increases. But not _multicore_. I repeat - all of today's high-end GPUs are multicore already.
 
kemosabe said:
More from the R520 rumour mill. Perhaps Dave has been subtly leading us astray at ATI's request? :devilish:

I think Dave can get suckered without it being a conspiracy (this is a general observation, independent of the specific instance under discussion). ATI has been getting on my nerves a bit that way over the last year --intentional disinformation as a habit is rather distasteful, in my view. Who do they think they are, the US government? :LOL: I hope they knock it off rather than ramp it up.
 
wireframe said:
Of course I could be totally wrong and ATI spent a huge transistor budget on something we know nothing about, but where is the logic in thinking that would be the case?

Damon Runyon once said: "The race may not always be to the swift nor the victory to the strong --but that's how you bet." He would be betting with you rather than my (hopefully clearly labeled) speculation above.
 
RingWraith said:
loekf2 said:
Further, releasing a product in 2005 for WGF 2.0 also makes no sense to me. It will just be DX9.0c compatible, shader model 3.0 (still).

Still? We have yet to see a game that actually fully uses model 3.0. ATi has shown their choice to wait was a prudent one.

From what I've seen on my humble GF6800, there's Splinter Cell Chaos Theory, which has an SM3.0 path. So far Ubisoft has denied any news about a 2.0x path to keep ATI owners happy, but it looks like it will still be in there.

From the screenshots I've seen and my own experience, yes, there's a visual difference.

Two weeks ago, Ubisoft released a demo.
 
DegustatoR,

Other than our apparent differing views on what constitutes a core, did you think that Richard Huddy (was it?) was completely joking when he said Nvidia was not counting dies per wafer but wafers per die with regard to NV40 production? Of course he may have been exaggerating and having some fun FUDding, but there seems to have been some truth to this for both companies. They can produce as many 6800s and X800 Pros as you want, but it gets tricky at the 6800 Ultra and X800 XT level.

I don't see this as applicable when scaling down (disabling pipelines, etc.); this would be a way to scale up without incurring the wrath of probabilities in your production.

I tried to make the point that this is not ideal from a performance perspective, but with the growing sizes of GPUs (in terms of transistors) it may be the most (only?) economically viable way to allow forward progress.

If SLI can work at distances measured in centimeters, surely it can work, and work even better, at distances of millimeters. Especially when the package is completely known with very little room for variation from unpredictable sources.

I am not saying it would be great or that I even think this is what ATI and Nvidia are doing. This is simply my take on the speculation in the article. There is probably a huge difference between two packages per video card and two cores per package. There may be plenty of room there to recoup some of the losses in the current form of SLI.
 
The problem with this is that you're assuming that the cost of the interconnect between the two cores would be less than the cost of wasted silicon from one larger core. Considering that for these to work together efficiently they'd need a shared input/output structure, I rather doubt this sort of thing makes any sense.

But, regardless of the economic feasibility of multi-core GPU designs, this just goes to show that Fuad has no f'in clue what he's talking about. After all, this is an entirely different idea from the multi-core ideas on the CPU side, and yet he equates them. So whichever way you look at it, the Inquirer is a shitty place to look for info.

That said, I actually did suggest that it was possible a while back, though I seem to remember being doubtful even then, and even more so now that I've had time to think about it. I'm willing to bet that 99% of the remotely plausible rumors on the Inquirer, furthermore, have already been proposed on these very forums. Hell, they've even copied their rumors straight off these forums in the past. So no, the Inquirer is of no use.
 
Chalnoth said:
The problem with this is that you're assuming that the cost of the interconnect between the two cores would be less than the cost of wasted silicon from one larger core. Considering that for these to work together efficiently they'd need a shared input/output structure, I rather doubt this sort of thing makes any sense.

From what I have seen, each NV40 already has the SLI logic needed for I/O, and there is no other SLI-enabling logic to be found anywhere else. So you could already connect two NV40s using this existing interface. You'd just need a dumb bridge, like the one found atop two PCIe SLI boards.
 
Except SLI doesn't give two GPUs access to the same memory interface or PCI Express bus.

Beyond that, you would not want to run a dual-core chip in SLI mode. It'd be prone to horrible inefficiencies.
 