AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 GPU lineup?

  • Within 1 or 2 weeks

    Votes: 1 0.6%
  • Within a month

    Votes: 5 3.2%
  • Within a couple of months

    Votes: 28 18.1%
  • Very late this year

    Votes: 52 33.5%
  • Not until next year

    Votes: 69 44.5%

  • Total voters
    155
  • Poll closed.
http://www.brightsideofnews.com/new...to-improve-40nm---or-atinvidia-are-gone!.aspx

Hot on the heels of stories from last week, it looks like TSMC is in serious trouble with two of its key customers. According to information we have, both ATI and nVidia are "seriously disappointed" at Taiwanese Semiconductor Manufacturing Company. I had a couple of conversations this weekend with people who are in the know on the subject, and learned that the situation is looking gloomier than the first news led us to believe. Leakage issues are a standard manufacturing problem that can happen to anyone, including Intel [does anyone remember Prescott, a.k.a. PressHot?].
The thing I don't get is that the HD4770 doesn't seem to have leakage problems. It overclocks reasonably well.

Jawed
 
The base 40G leakage is lower than 65G. AFAIK, the problem is SiGe straining. All FWIW...
(hey, why does this remind me of 130nm and low-k! oh, right...)

Also, could this whole "omg, it's really a 45nm process!" bullshit just stop? 45LP exists, it's there, it's in production, and it has an SRAM cell size of 0.299 versus 0.243 for 40LP and 40G. The fact that TSMC initially considered naming a 40nm process 45G (not the other way around!) is another problem entirely. All processes at the same node are different, and you shouldn't expect the same density or performance from every manufacturer. How long has Moore's Law lasted now? And the press, including some writers for the 'leading' semiconductor online publications, still hasn't figured that out? Sigh!
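The gap those quoted cell sizes imply is easy to sanity-check with a quick back-of-envelope calculation (a rough sketch only: the cell sizes in µm² are the ones quoted above, and real logic density scales differently from SRAM bitcells):

```python
# Rough density comparison of the SRAM bitcell sizes quoted above:
# 0.299 um^2 for TSMC 45LP vs 0.243 um^2 for 40LP/40G.
cell_45lp = 0.299  # um^2 per SRAM bitcell, 45LP
cell_40g = 0.243   # um^2 per SRAM bitcell, 40LP/40G

# Bitcells per mm^2 (1 mm^2 = 1e6 um^2)
density_45lp = 1e6 / cell_45lp
density_40g = 1e6 / cell_40g

gain = density_40g / density_45lp - 1
print(f"40nm SRAM packs ~{gain * 100:.0f}% more bitcells per mm^2")  # ~23%
```

So even between two "45-ish" names the SRAM density differs by roughly a quarter, which is the point: the node label alone tells you very little.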
 
NV should probably make a true workstation-only chip this time with the GT300 series if they are to maintain their domination of the high-end professional market, rather than binning an expensive ASIC as an overpriced/over-engineered gamer SKU like the GTX 280. A lower-grade variant of the architecture could then target a sweet spot in the consumer space, to rival the flagship RV8xx part.

Two different chips would cost a hell of a lot more and wouldn't be that optimal during these rather hard times. Of course it's a highly interesting theory, but even if they have thought of something along those lines, I doubt the desktop part wouldn't be a single high-end chip.
 
Are there any Tesla variants that use anything but the highest-end configuration of a given chip? It could very well be the case that Tesla shares the same chip with high-end Geforce and Quadro, while lower-end desktop and workstation parts cut out some of the Tesla bits (like double precision), depending on implementation. If they stick with the ugly tacked-on approach of GT200, then they could probably just scale back the number of DP units.
 
I remember nvidia expressly saying they won't make special Quadro or Tesla chips, and they backed away from restricting the use of the DP units on GT200 to the professional lines.

GT300 is something new; an article said its units could process either SP or DP, so we'll see. I expect GT215, GT216 and GT218 to physically lack the DP units.
 
GT300 is something new; an article said its units could process either SP or DP, so we'll see. I expect GT215, GT216 and GT218 to physically lack the DP units.

I don't even expect that all those chips you've mentioned are GT200-based, let alone feature DP.
 
GT300 is something new; an article said its units could process either SP or DP, so we'll see. I expect GT215, GT216 and GT218 to physically lack the DP units.

As time goes by I almost get the feeling we'll never see GT21x parts and instead they'll just skip right to GT31x parts.

Are GT21x parts ever going to come out? Much longer and they'll be releasing around a year after GT200 was launched.

Just horrible execution on GT21x.

Regards,
SB
 
I think GT21x are the first 40nm nVidia chips, aimed at the notebook market. Whether we'll see them on the desktop is anyone's guess. I've heard that the 40nm process allowed nVidia to lower power consumption, but yields are low, so it's cheaper to make desktop parts on 55nm. I personally think that for nVidia, 40nm has no benefit over 55nm, since these chips don't support GDDR5 and would be severely bandwidth-starved. Mobile parts run at lower clocks, so the problem is not that significant.
 
The games bundled with the Voodoos were probably Glide games [wink]
And the Voodoo4 series only had 1-chip cards (the V4 4500 was the only one released, but there were prototypes for additional V4 models). The V5 series had a 2-chip model (V5 5500) and a 4-chip model (V5 6000), but the 4-chip model never made it to market; there are several different revisions of it out there with different bridge chips etc., and drivers, especially back in those days, never properly supported the V5 6k.

So it never saw market daylight. I really don't know how he got it, but he had it. So it was the V5 series you say, that monster with 4 chips? And as you say, there was a lot of fuss to get the drivers to work at all. But it was DX6, and it's too weird that it had some proprietary games that worked only on that specific card. I know marketing. Anyway, c-r-a-p.
 
So it was V5 series you say, that monster with 4 chips?
There are even other 3Dfx-based boards with 4 (or more) chips, but mainly for the professional segment, simulators etc. The majority of them are based on V1, V2, V3 and VSA-100 chips. E.g. the Mercury system consists of 8 V2 chipsets (24 3Dfx chips total), and the AAlchemy 8164 is a single-board VSA-100-based 8-way SLI system :)
 
The question is -- whose execution?

Well, considering AMD now has multiple derivatives of RV770 despite difficulties with 40 nm at TSMC, I'd have to put that squarely in Nvidia's lap.

Of course, if you wish to shift the blame, then you could also say that the bad execution on R600 wasn't ATI's fault, but I'd be hard pressed to find many people who would say that. :p

Regards,
SB
 
Is GT200 over-engineered? I think it's only an enhanced G80 with a modified TF:TA ratio and some DP units... I'd say it's rather oversized, not over-engineered.

Do we have a wiki for all these abbreviations around here? We certainly should, for newbies like me :D TF = texture filtering, TA = texture addressing, DP = double precision?
All texturing on nV is done inside one core (of 8 on G80 / of 10 on GT200), but most of the G80/GT200 (and next-gen) weakness, as I see it, lies in the fact that only half the ALUs in that core are general MADD units, while the other half are SFUs.

So it's 8+8 on G80 -> 12+12 on GT200 -> and, as it now seems, probably 16+16 on the GX300-generation chips. And on top of that, all texturing is done "inside a specific core", while on GX300 they now try to somehow distribute that texturing work and simplify data crunching with quad 32-bit packages pulled from (and stored to) memory/cache instead of one hex package. But texturing is still generally tied to the same core, which to me looks like a strange, quasi-redistributed way of doing texture filtering compared to how it should really be done for DX11, with independent TMUs.

And 16 pumped-up cores with 16 MADD + 16 SFU each is simply overkill for any graphics workload compared to ATI's DX10 approach of 64 general MADD + 16 "SFU", and the probable 96+32 "SFU" for DX11. ATI's approach is lighter and has more chance of full utilisation than the purely theoretical throughput of nvidia's cards, which only shows up in some special-case benchmarks or in CUDA applications.

___
Woow, i get the edit button. Tnx mods.
 
And 16 pumped-up cores with 16 MADD + 16 SFU each is simply overkill for any graphics workload compared to ATI's DX10 approach of 64 general MADD + 16 "SFU", and the probable 96+32 "SFU" for DX11.

It's doubtful that Nvidia will keep the same 4:1 MADD/SFU ratio. And 96+32 for ATI? You're proposing a 6:2 MADD/SFU ratio? That really doesn't make sense from a utilization perspective.
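For what it's worth, the "theoretical throughput" side of this argument is easy to make concrete with a toy peak-FLOPS calculation. Everything below is an illustrative assumption, not a confirmed spec: the 16×(16+16) config is the speculation quoted above, the 1.5 GHz shader clock is invented, a MADD is counted as 2 flops/cycle and an SFU as 1:

```python
# Toy peak-throughput sketch for the hypothetical ALU configurations
# discussed above. All unit counts and clocks are assumptions.

def peak_gflops(cores, madd_per_core, sfu_per_core, clock_ghz):
    # MADD (multiply-add) = 2 flops per cycle; count an SFU as 1.
    return cores * (madd_per_core * 2 + sfu_per_core) * clock_ghz

# Speculated 16 cores x (16 MADD + 16 SFU) at an assumed 1.5 GHz:
print(peak_gflops(16, 16, 16, 1.5))  # -> 1152.0 (GFLOPS)
```

The utilization question is exactly what this number hides: a peak figure like this assumes every MADD and SFU issues every cycle, which real shader workloads rarely achieve.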
 
As time goes by I almost get the feeling we'll never see GT21x parts and instead they'll just skip right to GT31x parts.
Is there actually a market for a GT21x part?
Maybe if GT200 had brought DX10.1, but they already have low-end DX10 models out there.
I'd say 55nm G92b + their existing low-end models (65nm must be pretty cheap and high-yielding by now, regardless of the bigger dies?) are adequate.

NV thus gets to save engineering resources for a bigger push on the next gen.
 