Nvidia G71 - rumours, questions and whatnot

Discussion in 'Pre-release GPU Speculation' started by ToxicTaZ, Dec 4, 2005.

Thread Status:
Not open for further replies.
  1. Headstone

    Newcomer

    Joined:
    Sep 29, 2003
    Messages:
    123
    Likes Received:
    0
    PSs maybe but not PPs. Or at least that seems to be the sentiment overall.
     
  2. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    NVidia's seen how ATI's conservative approach to the previous transition worked out really well for them, and so NVidia assumed that ATI was going to do the opposite at 90nm?

    Either that or NVidia's spies knew R520 was coming first, right back at the beginning of 2004?

    The fact that ATI did actually go with R520 first was a surprise all-round. All my comments also take into account the interval between R520 and RV515 as per the roadmaps from the beginning of 2005. RV515 was indicated as being third, in the August/September timeframe.

    http://www.pcbuyersguide.co.za/showpost.php?p=5922&postcount=1

    I'm sure they are - but there's also performance in the mix when you're looking to fend off your arch-competitor (dual-MADD pipes and higher clocks). And those costs have to be accepted at some point.

    All these counter arguments rely on NVidia waiting until after Christmas to compete with ATI, knowing (for years in advance) that ATI would be due for a 90nm transition sometime during 2005 - bearing in mind that NVidia has access to the same data from TSMC, on 90nm, that ATI has.

    Clearly, ATI handed NVidia two baseball bats to beat it round the head with: delays to 90nm products with concomitant old-SKU inventory problems (too little/too much - impossible situation :lol: ) and a get-out-of-jail-free card for NVidia's own problems with 90nm, because they already had very attractive products to sell into what became a vacuum for ATI.

    Jawed
     
  3. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    1,457
    Likes Received:
    200
    Location:
    msk.ru/spb.ru
    That might be true, but if G80 is a new architecture then it is also the foundation for many following NV GPUs, in the same way NV30 was the foundation for the NV3x, NV4x and G7x families. In that case the G80 architecture should be future-proof enough to compete not only in the first wave of D3D10 chips but in the second and possibly third waves too. And this kinda contradicts it being non-unified, because it's pretty clear that unified ALUs are more efficient in the long run for D3D10 shaders.

    Why, there is. They didn't need any 90nm low-end SKUs in the autumn of 2005 because they had pretty good NV44 SKUs there already. I'm pretty sure that NV's "lagging behind" on the 90nm low-end front is down to marketing, not technical issues. The high-end front, however, is another story...

    Well, that would be a surprise. On the other hand, NV's recent history with the NV40->G70 transition would suggest that NV isn't aiming for clockspeed increases on the high-end part. If we translate that to the G70->G71 transition then, well, it may be true :)

    Though I'd still go for G70@90nm, with G80 during the summer.
     
  4. nelg

    Veteran

    Joined:
    Jan 26, 2003
    Messages:
    1,557
    Likes Received:
    42
    Location:
    Toronto
    IIRC Dave was suggesting (hinting) that, due to their work on Xenos, this time would be different.
     
  5. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,333
    Likes Received:
    290
    OT/ why don't we have a 24pp / 32pp poll?
     
  6. ERK

    ERK
    Regular

    Joined:
    Mar 31, 2004
    Messages:
    287
    Likes Received:
    10
    Location:
    SoCal
    How long would it take Nvidia to finalize clocks and ramp production after seeing the performance of the X1600 and X1300? These have not been available for very long.

    Sorry if this was already stated above.
     
  7. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,533
    Likes Received:
    583
    Location:
    New York
    You keep saying this but there's absolutely no compelling reason for Nvidia to have had 90nm parts out there any sooner. I get the feeling you think Nvidia should have 90nm parts out there just for show. In terms of margins and performance the "old" NV4x parts are cleaning up.

    I don't get your reasoning for the claim that Nvidia's 90nm parts are late. The only part from ATi to give Nvidia any grief in a long, long time came out a week ago!
     
  8. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    Just doing some history checking: the G71 showed up in drivers well before the launch of the 7800 GTX 512, and everyone thought it was originally the 7800 Ultra (or the 7800 GTX 512). I'm thinking NV had ample time to make changes to this chip. Or the G71 name was shoved in there to throw everyone off. If we remember back, when the original pic of the massive cooler first showed up, that's what the Chinese site thought the G71 was.
     
    #868 Razor1, Feb 1, 2006
    Last edited by a moderator: Feb 1, 2006
  9. Arty

    Arty KEPLER
    Veteran

    Joined:
    Jun 16, 2005
    Messages:
    1,906
    Likes Received:
    55
    I'm also dropping off the 32pp bandwagon after reading the last couple of pages. Looks like a 24pp 700MHz G70 (on 90nm) would be quite competitive with R580 in most games.
     
  10. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    True. But why stop there?
     
  11. Arty

    Arty KEPLER
    Veteran

    Joined:
    Jun 16, 2005
    Messages:
    1,906
    Likes Received:
    55
    G80?
     
  12. walterman

    Newcomer

    Joined:
    Dec 27, 2005
    Messages:
    8
    Likes Received:
    0
    From a website that's new to me, called Xpentor:

    :-?
     
  13. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,696
    Likes Received:
    2,496
    Location:
    Finland
    Ever since there was talk about dual-core CPUs for the home market (A64s & Pentiums), some people who don't really know much, if anything, have been asking "what about dual-core GPUs?" - and that falls into the same category in my opinion. Essentially, the damn cores we're using right now, and the previous ones, and the ones before those and... could already be seen as multicore GPUs, right?
     
    #873 Kaotik, Feb 1, 2006
    Last edited by a moderator: Feb 1, 2006
  14. SugarCoat

    Veteran

    Joined:
    Jul 17, 2005
    Messages:
    2,091
    Likes Received:
    52
    Location:
    State of Illusionism

    Yes, a quad = a core.

    I wasn't aware the entire R5xx core lineup was a bugged architecture; Nvidia had better get going then, because that bugged architecture is beating what they have to compete with it.

    Although I would like to see a high-end dual-G80 single-PCB card designed by Nvidia, and then laugh at its power consumption and heat output, not to mention cost.
     
    #874 SugarCoat, Feb 1, 2006
    Last edited by a moderator: Feb 1, 2006
  15. Fox5

    Veteran

    Joined:
    Mar 22, 2002
    Messages:
    3,674
    Likes Received:
    5
    I wouldn't say it equals a core, but graphics are already so highly parallelized that multicore doesn't add much. Besides, are dual-core CPUs even the right approach? I'd imagine not everything needs to be duplicated, though duplication is probably the easiest route with current CPU instruction sets. Multi-CPU support already allows the programmer to explicitly split up code, but I'm sure there's a more efficient way to do it in hardware than duplicating absolutely everything. (Hyperthreading is still only a single core, yet the CPU benefits from explicitly dividing the code.)
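
    As a rough illustration of the "explicitly split up code" point above (an assumed sketch, not anything from the thread): two CPU threads each take half of a per-pixel loop by hand, whereas a GPU spreads the same loop across its pipelines on its own. The shade() function and the buffer size here are purely hypothetical.

    // Minimal C++ sketch (assumed example): hand-splitting a per-pixel
    // workload across two CPU threads, the way multi-CPU programming requires.
    #include <cstddef>
    #include <cstdint>
    #include <thread>
    #include <vector>

    // Hypothetical per-pixel work, standing in for a shader-like computation.
    static std::uint32_t shade(std::size_t i) {
        return static_cast<std::uint32_t>(i * 2654435761u); // arbitrary hash-style math
    }

    int main() {
        std::vector<std::uint32_t> framebuffer(1920 * 1080);

        // The programmer decides where to cut the work: first half to one
        // thread, second half to the other. Only the loop is duplicated,
        // not the data or the rest of the program.
        auto worker = [&](std::size_t begin, std::size_t end) {
            for (std::size_t i = begin; i < end; ++i)
                framebuffer[i] = shade(i);
        };

        const std::size_t half = framebuffer.size() / 2;
        std::thread t0(worker, std::size_t{0}, half);
        std::thread t1(worker, half, framebuffer.size());
        t0.join();
        t1.join();
        return 0;
    }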
     
  16. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Rather, going multicore would likely decrease performance. Though IHVs might save some money through improved yields by splitting the die into pieces packaged separately, it's going to become harder to keep all those pipelines fed (as it won't be as easy to share data between the two cores as it is to share data within one larger core).
     
  17. Farid

    Farid Artist formely known as Vysez
    Veteran Subscriber

    Joined:
    Mar 22, 2004
    Messages:
    3,844
    Likes Received:
    108
    Location:
    Paris, France
    Does that even make sense?

    If they mean a real dual core, as in two cores on one die, what would be the point? Transistor- and yield-wise this doesn't make sense, seeing how parallelizable GPU architectures are.
    Why not just bump the number of ALUs/ROPs/shaders accordingly, and where it makes sense to do it, instead of going for a "double everything" solution?

    And if they misworded it and actually meant two physical GPUs on every board, then it would make sense from a transistor budget and yield point of view, but the 3dfx Voodoo 5, XGI's Volari V8, etc. may have predated the G80 (or NV50) by a few years.
     
  18. rwolf

    rwolf Rock Star
    Regular

    Joined:
    Oct 25, 2002
    Messages:
    968
    Likes Received:
    54
    Location:
    Canada
    Unless the second chip is cache and not another GPU.
     
  19. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    But then your performance between this cache and the GPU is going to be degraded from what it would be if it were on the same core.

    Of course, this will still be a viable solution in the case where the "cache" is some sort of DRAM, as there are apparently still fabrication difficulties in getting full-performance chips with embedded DRAM (such as the Xenos or some of nVidia's notebook parts which have on-package memory).
     
  20. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,429
    Likes Received:
    181
    Location:
    Chania
    If any sort of embedded memory on a PC standalone solution is meant to serve as a cache, then I'd consider it silly too to have it on some sort of daughter die (unless I'm missing something).

    eDRAM on Xenos is more than just a cache and less than a complete framebuffer, IMHO. However, it's a console chip in a UMA architecture, and that's a totally different beast. By that reasoning, eDRAM does not, for the time being, make any sense for PC standalone products, because the RAM quantity required would be large, meaning it would increase the transistor count quite a lot.

    A birdy told me it has seen two experimental layouts for future hypothetical chips; one was a USC and the other a DR. And before anyone pulls up his skirt too early, remember I said experimental. As for the latter, if such a thing ever made it into their designs, it's not absolutely necessary to defer everything either ;)
     