NVIDIA confirms Next-Gen close to 1TFlop in 4Q07

Don't AMD and IBM do at least some process work together? I believe IBM is quite far along on the stacked RAM side of things (and NV once used IBM to fab stuff, not sure what they do now...)
 
You know, IMHO Intel is playing everyone for fools. I feel they have something up their sleeve.

The problem I have with this sentiment is that Intel's recent efforts at new and revolutionary architectures haven't fared very well. For starters, they don't seem able to resist the temptation to use them as vehicles to rid themselves of that pesky competition going on in the x86 marketplace.

I don't doubt Intel's technological and engineering prowess; their ability to develop the right product for the right market at the right time, on the other hand, I'm less convinced about. They don't strike me as a very 'nimble' company.
 
I think that's a fair point: not everything Intel touches turns to gold. Heck, it wouldn't be their first shot at stand-alone graphics.
 
I humbly submit that Intel probably knows what they are doing. They see that they are making a ton of money and dominating the video chipset market with their supercheap OEM-targeted integrated video designs. They make more money than Nvidia and DAAMIT combined even though they aren't even pretending to produce high-end enthusiast parts for gamers who insist on running everything at 1920x1200 with 16x anisotropic filtering and 16x MSAA, or else it's not playable for them.

Anyways, here's all I really have to say. I apologize for quoting myself but I didn't see any reason to change it at all from what was posted to Rage3D. Obviously you guys know just about infinitely more than I do about this sort of thing so I apologize for any glaring errors and misconceptions from my Rage3D post in advance. Feel free to set me straight on anything you want. :)

http://www.rage3d.com/board/showthread.php?t=33891728

The problem with competitive technology industries like GPUs is that when you fall behind consistently, you tend to keep falling behind and may even fall further behind. It's just not that easy to suddenly reverse course and switch gears; if you have a bad design, you might be stuck with it for a while. Just look at how long Intel was stuck with P4/NetBurst; it took them years to follow up with Core 2. Intel was the dominant market leader and had all the OEMs to themselves, so they could afford to be screwed for years on end and still have the time to develop a competitive design.

DAAMIT do not have that luxury. ATI was already the distant 2nd-place finisher for 3 generations running (6800, 7800, 8800), mainly because they were so late with a competitive part all 3 generations. Nvidia doesn't have any interest in slowing down and waiting for ATI to catch up, so they are happily developing G90, which is rumored to have 192 SPs and clock in the 700-800MHz range at 65nm. Nvidia might even stick with GDDR3 if they can get it up to 1200-1300MHz (2400-2600MHz effective), as the new architecture that started with G80 is not particularly memory-bandwidth intensive.
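Quick back-of-envelope on those GDDR3 numbers, in Python. Note the 384-bit bus width below is just an assumption carried over from the G80 GTX; the rumour says nothing about the bus:

def bandwidth_gb_s(bus_width_bits, data_rate_mhz):
    # peak bandwidth in GB/s: bus width in bytes times the effective (DDR) transfer rate
    return (bus_width_bits / 8) * (data_rate_mhz * 2) / 1000

print(bandwidth_gb_s(384, 900))    # G80 GTX today: ~86.4 GB/s
print(bandwidth_gb_s(384, 1200))   # GDDR3 at 1200MHz: ~115.2 GB/s
print(bandwidth_gb_s(384, 1300))   # GDDR3 at 1300MHz: ~124.8 GB/s

So even sticking with GDDR3 they'd get roughly a third to almost half more bandwidth than the GTX has now, which fits with the architecture not being bandwidth starved.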

ATI is now in a very deep hole. Their part is highly inefficient and provides very poor performance per watt, especially when you consider that G80 is still at the 90nm process node and still runs cooler and uses less power even though it has a bigger die than R600. Nvidia has delivered a part with a very innovative design that runs the SPs at a much higher clock than the core, so fewer SPs are needed, and therefore their transition to 65nm will result in a smaller die than ATI's, improving yields further. Nvidia's part has a very efficient render backend and ROP/TMU design that gives essentially free trilinear and anisotropic, which ATI's part is struggling with, and at higher IQ. ATI also need to fix whatever problems they are having with their ROP/TMU design to bring MSAA back into the ROPs where it should be, instead of sapping performance by doing it in the shaders.

Overall Nvidia is in a very good position, as they pretty much hit every performance and IQ goal they had with the G80 design. ATI has a lot of work to do back at the drawing board, which will result in further delays and put them further behind. At this point they might not have the R6x0 refresh part out before Nvidia has an entirely new generation out in G90, and being a whole generation behind could be a deathblow for DAAMIT, as they are also struggling to get Barcelona out and have it be competitive with Penryn.

The big advantage that R600 theoretically had over G80 was its enormous shader transistor budget. G90 is rumored to bring a huge increase in shader power over G80 if the 192 SP figure is to be believed, which would negate the only big theoretical advantage R600 had, outside of its completely unnecessary and probably expensive excess memory bandwidth. R600 is so much weaker than G80 in basic rendering design, such as geometry setup, actual fillrate (even though the 512-bit memory bus and 1024-bit ring bus give it a massive theoretical advantage), texture filtering, and AA, that it's almost unbelievable DAAMIT would have released it in the state it did, except that we know DAAMIT was already 6 months late and had to get something, anything, out the door, and financially DAAMIT is staring into the abyss.

Nvidia designed a powerful, efficient, balanced architecture with G80 and they can easily ride it for the next 2 years just with die shrinks and slapping more SPs onto it and clocking it into the stratosphere. ATI is in a big hole here, not unlike the big hole AMD is staring at with Barcelona versus Penryn, and they need to do something and it needs to be miraculous if they want to catch up with Nvidia now. Being a whole generation behind is the worst thing imaginable in the technology industry.
 
Given that they claim 512GFlops for G80, I would be surprised if that "almost 1TFlop" comment was with regard to only MADD flops. I'm thinking 160 G80-class shaders @ 2GHz. That'll put you at 0.96TFlop if you count the MUL. Almost 1TFlop of MADDs would be insane given that the fastest G80 right now only manages about 410GFlops.
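Here's the rough math in Python, counting each G80-class SP as a MADD (2 flops) plus the co-issued MUL (1 flop) per clock; the 160 SP / 2GHz configuration is pure speculation on my part:

def shader_gflops(num_sps, shader_clock_ghz, flops_per_clock):
    return num_sps * shader_clock_ghz * flops_per_clock

print(shader_gflops(128, 1.35, 3))   # 8800 GTX counting MADD+MUL: ~518 GFLOPS (the "512" claim)
print(shader_gflops(128, 1.35, 2))   # 8800 GTX counting MADD only: ~346 GFLOPS
print(shader_gflops(160, 2.0, 3))    # speculative 160 SPs @ 2GHz: 960 GFLOPS, i.e. "almost 1TFlop"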

TheInq has reported that G80 is a 330GFlop beast, so does that mean G90 will have 3 times the shader performance of G80??
 
Faceless Rebel, I'd tend to agree with much of what you wrote, but it's probably worth pointing out that Intel does NOT make more money out of graphics than AMD and NVIDIA combined. Heck, they don't even make a fraction of what either company makes, depending on how you look at it.

Intel has 38.7% of the graphics market, while NVIDIA and AMD have 28.5% and 21.9% respectively (link). At the same time, Intel's average selling prices for their IGPs are, well... low. In fact, the saying tends to be that it's not worth competing directly against Intel's IGPs because "you can't compete with free".

While that might be slightly exaggerated, it is true that the price difference between Intel's IGP-less chipsets and the same ones with IGPs is negligible, if not actually zero. So either you consider Intel's entire chipset revenue when they sell IGPs, which seems quite unlikely to match NVIDIA's or ATI's businesses, or you consider just the premium over a no-IGP chipset. And then you'd have to argue that Intel has a graphics revenue of about, errr... $0! And I don't even want to calculate what their theoretical gross margins on graphics are then, hehe.

That doesn't mean they don't know what they are doing, of course. It makes a lot of sense for them to sell basically free IGPs. As for their discrete GPU plans, I'm sure they aren't completely clueless - I would tend to agree that Larrabee is just part of their masterplan though, and perhaps not even the part which NVIDIA and AMD should be the most concerned with. I had a little theory regarding Intel's fab teams and IP that would change the dynamics in very interesting ways, but that's not the point, so I'll just shut up for now.

As for R6xx vs G8x, I think you're painting a slightly too negative picture, although I'd still tend to agree with the overall point - it's easy for NVIDIA to tweak their architecture to take on pretty much anything ATI could come up with in the next 9+ months or so. R600 was hardly a 3-hour project, unlike a certain other chip *cough*R420*cough*, and I'm sure they can improve efficiency a fair bit and maybe rebalance the ratios a bit, but that won't give them a completely new architecture. It'll be interesting to see how different R700 is, though. Will it be a slight evolutionary step, or a much larger transition? And R800, is it a complete overhaul, and if so in what timeframe?
 
less than 300M.

But judging by how the codename looks, I think G92 is based on 6 clusters of 32 SPs each.

If this TFLOP figure is meant for MADD only, I am really amazed... :oops:

I think 64 SPs is definitely less than 200M.... Let's look at G80 vs G84....
G80 has 681M transistors with 128 SPs, 32(??) TMUs and 24 ROPs.... G84 has 289M with 32 SPs, 8 TMUs and 8 ROPs....

681-289=392M

That means 96 SPs, 24 TMUs and 16 ROPs cost 392M transistors.... So I think 300M is enough for 128 SPs....
 
Actually, G84->G86 might be a more accurate comparison, because the number of ROPs and the memory bus width did not change there. Other things have certainly been tweaked a bit, such as cache sizes and misc. architectural traits, but it's the best comparison point we have anyway.

So that's a 79M-transistor difference for 16 SPs and 8 TMUs (including 8 address operations/clock). Extrapolating that would give us 316M for 64 SPs and 32 TMUs, but in the end that doesn't tell us much unless we know whether G84's and G86's SPs are truly comparable, and how many transistors those 32 TMUs are taking.
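For what it's worth, here's that extrapolation spelled out in Python; it obviously assumes the per-block cost scales linearly and that the small chips' SPs and TMUs are representative of the bigger ones, which is exactly the caveat above:

DELTA_TRANSISTORS = 79e6       # G84 minus G86: the cost of 16 SPs + 8 TMUs
blocks_for_64_sps = 64 // 16   # 64 SPs (and 32 TMUs) is four such blocks
print(blocks_for_64_sps * DELTA_TRANSISTORS / 1e6)   # ~316M transistors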

In the end, what I'm most curious about is what will happen to the ROPs. With a 256-bit memory bus, they would only have 16 ROPs. And that's obviously a tad weak compared to G80, even after taking a potential GX2 into consideration. Maybe making blending full-speed for INT8 and improving the stencil and depth hardware slightly would be enough, though. Or alternatively, another clock domain...
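To put some rough numbers on the ROP side, a quick fillrate comparison in Python; the G80 GTX figures are real, while the 16-ROP part's 700-800MHz clocks are just the rumoured core clocks from earlier in the thread:

def pixel_fillrate_gpix(rops, core_clock_mhz):
    # assumes 1 pixel per ROP per clock, ignoring Z-only and blending rates
    return rops * core_clock_mhz / 1000

print(pixel_fillrate_gpix(24, 575))   # G80 GTX: ~13.8 Gpixels/s
print(pixel_fillrate_gpix(16, 700))   # hypothetical 16-ROP part: ~11.2 Gpixels/s
print(pixel_fillrate_gpix(16, 800))   # ...or ~12.8 Gpixels/s at 800MHz

Which is why either more per-ROP throughput or a separate ROP clock domain would make sense.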
 
Don't AMD and IBM do at least some process work together? I believe IBM is quite far along on the stacked RAM side of things (and NV once used IBM to fab stuff, not sure what they do now...)

What AMD and IBM do doesn't matter as long as the GPUs are fabbed at TSMC.

Nvidia's experience with IBM as a foundry was apparently so great they went back to TSMC.
 
What makes you say that? Because of their first attempt?


Intel's focus seems to be on what GPGPU will do to them more than anything else; it's going to be a huge market and might take a chunk out of Intel.

Not saying they aren't interested in the graphics portion, but Intel hasn't been diversifying as of late; more like cutting back and going back to what made Intel, Intel. You always have to make sure home base is secure before exploring ;)
 
Btw, re the scheduling: if we now take G80 as the "norm" going forward, then we're assuming they've just permanently slipped their schedule 6 months, in two three-month add-ons over the course of 2005 and 2006. So NV40 is Spring 2004, G70 is Summer 2005, G80 is Fall 2006. But now they're going to stick firm at Fall, rather than go, say, Winter 2007/08? Of course the other problem with that telling is that it's pretty clear G80 was originally intended to be a Summer 2006 product... Marv said so in late 2005.

It could also be that NV40-G70 was one product series while G80-G92 is another.
In other words, bringing out G70 wasn't hard because the base designs were similar, and the same goes for G80 and G92. So if this shows up in Q4 '07, I wouldn't be surprised if we don't see a new design from Nvidia until Q2 '09.
 
I also don't see a reason why they should rush and redesign stuff that quickly. After all, G80 is a pretty well balanced solution, and a better starting point for evolutionary tweaking (more SPs, better SPs, etc.) than for a complete overhaul. It's a similar situation to NV40, just like Maintank noted. This time they can possibly ride the architecture even longer than they did NV40.

I think G92 will be more or less a G80 with double-precision and increased shader performance, but I doubt it'll contain anything revolutionary. Maybe next year with G100.
 
Some of AMD's patents with regard to array-processor communications and the ring bus might be indicators that the former ATI side has done some groundwork there.

I heard that one of the main reasons ATI agreed to the acquisition was to use AMD's on-die communication technologies.
 