Dave Baumann Saves the Radeon HD 4850!

Discussion in '3D Hardware, Software & Output Devices' started by thehulk, Dec 3, 2008.

  1. Arty

    Arty KEPLER
    Veteran

    Joined:
    Jun 16, 2005
    Messages:
    1,906
    Likes Received:
    55
    IMO ATI always borked their mainstream part by a little too much; thanks to Dave, the 48xx didn't suffer, and I hope they carry this trend forward.
     
  2. Gunhead

    Regular

    Joined:
    Mar 13, 2002
    Messages:
    355
    Likes Received:
    0
    Location:
    a vertex
    I agree, the speed hit and the consequent need to lower the rez largely negated that feature for new games. (It was very dandy for older favourite titles with a fixed top rez.) Likewise the late arrival was an important factor; they seemed to have some real bumps on the road with Napalm (wasn't there a recall, or at least a cancellation of some sort?) and yet diverted precious resources to the feature-creeping Rampage. The DX7 TnL buzz was in full swing when V5 launched, never mind how useful that feature actually was back then.

    One thing I can't understand in retrospect, why didn't 3dfx use V2's well working SLI solution for V3? Another PCI-based card to throw in with a bridge or cable interconnect, the combo would have completely owned TNT2U/G400/Rage Pro in sheer speed (if not precision and image quality) and given some real opposition to even Geforce DDR. I'm sure it would have made the V3 a more desirable product and prolonged its life -- at the crucial pre-V5 time when they were already bleeding money badly and needed more to make Rampage finally happen (or just see the Nvidia lawsuit through -- they had good prospects of a victory there, I seem to recall from the discussions then).

    But they had the coolest roadmap ever. Rampage, Sage, FEAR, Fusion, Mojo... much better than "NV20" or "R100". xD

    Edit: Obviously they didn't have a Dave! (Or did they? I have a vague memory of somebody of B3D staff working at 3Dfx...)
     
    #102 Gunhead, Dec 12, 2008
    Last edited by a moderator: Dec 12, 2008
  3. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
    And the chip on the 2nd board wouldn't have needed all the 2D stuff.
     
  4. Gunhead

    Regular

    Joined:
    Mar 13, 2002
    Messages:
    355
    Likes Received:
    0
    Location:
    a vertex
    True, but I guess the 2D unit was already a fairly small portion of the silicon, so maybe it didn't matter that much. (Or they could have supported and marketed it as a dual-head rig for 2D use or something.) I'm sure hardcore gamers would have paid effectively double the price for double the 3D performance (like many did with V2), even with an unnecessary 2D unit there in a dual card V3 combo.

    BTW, I believe V5 had two identical VSA-100 chips, so they kinda went down that road ("wasting" one 2D unit) in the end.

    Water under the bridge now, of course. It just seems a shame that they didn't leverage tech that they had already developed and used successfully. Matrox used to have PCI cards you could add for multi-head, but they didn't have any system like SLI for boosting 3D performance with that.
     
  5. John Reynolds

    John Reynolds Ecce homo
    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    4,491
    Likes Received:
    267
    Location:
    Westeros
    Dave Barron, the site's founder, worked for 3dfx for a few months and I believe Kristof Beets had been hired and was in the process of moving to the USA from Europe when the company went under.
     
  6. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
    So they did have a Dave :D
     
  7. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,426
    Likes Received:
    10,320
    I think it had far more to do with forgoing the high end than you believe. Basically, being forced to adhere to a strict size and power limitation put a lot of pressure on the engineers to work (what ended up being) a virtual miracle. People being people, had there not been the restriction on size, if they had needed more from a certain area they would probably have just added more rather than completely reworking things. And had that been the case, I think the end product wouldn't have been nearly as good as RV770 ended up.

    Also, you neglect the fact that not only did ATI's mainstream/performance part drag down the midrange pricing of Nvidia, but it also forced Nvidia to lower their enthusiast pricing. With performance close enough to enthusiast levels at a price significantly lower, Nvidia was having a hard time attracting any buyers to the GTX 280. So not only did it "reset the mid-highend back" it also reset the enthusiast level back by making the GTX 280 such an unattractive proposition at the original price point.

    As it turned out, 4870 ended up a larger win than it would otherwise have been due to their focusing on a small size in conjunction with Nvidia continuing to pursue (relatively) huge monolithic chips.

    ATI is pulling in good margins and at the same time forcing Nvidia into a position of having to pull in minimal margins in order to compete.

    I truly believe this would not have been the case had ATI's engineers been given free rein yet again to build a massive monolithic chip.

    Of course, as always this is only my opinion. :)

    Regards,
    SB
     
  8. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    Very nice article - reminds me of the good old days of Anandtech.

    The article didn't really tackle the X2 configuration. I suspect that the CrossFire Sideport was a feature tacked-on late in the design cycle and that RV770 was not originally "aimed at performance with X2 to cater for enthusiast".

    I'm suspicious that part of the "20% uplift in ALUs/TUs" that RV770 benefitted from is due solely to the addition of the CrossFire Sideport.

    Against this is the idea that the engineers "over-achieved" in shrinking the functional units of R600 (and re-configuring them, e.g. making the TUs 8-bit, not 16-bit) - we don't know how big RV770 was budgeted for back at the beginning of the project, e.g. it could have been 300mm2...

    Anyway, I still think that the clearest indication of the success and continuation of this approach is the fact that three chips, RV710, RV730 and RV770 were all available to buy within a quarter-long release schedule. That rapidity and the marketing impact it makes seem to me to be at least as important as the whole VFM ballyhoo that HD4850/HD4870 created.

    Jawed
     
  9. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    Would the implication of what you are saying be that if the sideport weren't there, the die could have been smaller and not had the floor space for the extra SIMDs?

    Everything else in the chip seems to fit the exact layout well, sideport or not.
    Two edges of the die are pretty well packed. So omitting or adding the crossfire sideport late in the game would have done nothing so long as those sides weren't changed.

    Would anyone with knowledge of the process enlighten me as to when the engineers would decide something like the physical layout of the interfaces?

    I suppose we could try to play Tetris with cutouts of the RV770 die shot to see how else they could have managed it.
     
  10. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    If you look at the "plan" here:

    http://www.techreport.com/articles.x/14990/2

    and compare that with the die shot that's just above it, you can see that the CrossFire Sideport (I'm guessing the intense blue, very thin strip up the right hand side of the die, shown as part of the orange area in the plan) is about 2/3 as long as the PCI Express interface. That increase in perimeter, alone, should have had a marked effect on die size (I calculate an 8.7% increase in perimeter and a 20% areal increase, assuming that without the Sideport it'd be a square die).
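
    For what it's worth, that back-of-the-envelope geometry can be sanity-checked in a few lines of Python. This is only a sketch of the assumption above: a square die without the Sideport, with the extra edge length absorbed by stretching one dimension into a rectangle.

    ```python
    # Rough sanity check of the perimeter/area argument, assuming
    # (hypothetically) a square die of side 1 without the Sideport,
    # stretched into a 1 x h rectangle to gain extra edge length.

    def stretched_die(perimeter_increase):
        """Given a fractional perimeter increase over a square die of
        side 1, return the fractional area increase if one dimension
        grows to absorb all of the extra perimeter."""
        s = 1.0
        old_perimeter = 4 * s
        new_perimeter = old_perimeter * (1 + perimeter_increase)
        h = new_perimeter / 2 - s       # from 2 * (s + h) = new_perimeter
        return s * h - 1.0              # fractional area increase

    # An 8.7% longer perimeter implies a ~17% larger die:
    print(f"{stretched_die(0.087):.1%}")  # -> 17.4%
    ```

    Under that toy model an 8.7% longer perimeter comes out to roughly 17% more area, in the same ballpark as the ~20% figure above.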

    The Sideport connects to the Hub:

    http://www.techreport.com/articles.x/14990/1

    so gives the appearance of being "easy to bolt on". The fact that the sideport actually appears to be a prototype unit which seems in the end to have no use, arguably strengthens the idea that it was bolted on.

    As for die-tetris, the other issue is simply not knowing how much of the orange area is consumed by the logic supporting this interface as well as the interface itself.

    Jawed
     
  11. Psycho

    Regular

    Joined:
    Jun 7, 2008
    Messages:
    746
    Likes Received:
    41
    Location:
    Copenhagen
    I guess he meant that the sideport mainly takes up extra pins/pads, not logic space, and as mentioned earlier they added the extra SIMDs because they would have been pad-limited otherwise. (Seems like you were talking a bit past each other.)
     
  12. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    My question would be whether or not the hardware on the other sections of the perimeter could be shifted enough to eliminate the dead space a missing sideport would have left.

    The GDDR5, PCI-E, UVD and display controller sections would have to move around, and I'm curious to know how far in advance their placement would have to be decided.

    If those sections were not free enough to move sufficiently, then the die perimeter would not shrink or grow by the full length of the sideport.
    Perhaps the sideport is a later add-on, but how much did it actually displace?
     
  13. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    GDDR interface would be relatively unaffected I reckon since it covers the majority of the perimeter already - and sections of it are localised to RBE/MC units. The other blocks are all routed via the hub, so it's really a question of whether there are any particular hub-related constraints. Removing the CrossFire Sideport prolly relaxes those constraints.

    But I agree, there's no way to tell how much area would be saved by omitting the sideport. The 20% estimate I gave earlier (for the die as a whole) is considerably more than the 25% growth that affected the ALUs/TUs alone, from 8 clusters to 10 (not 20% growth as I mentioned earlier :oops: ).
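
    The cluster arithmetic behind that correction is trivial to check (plain Python, no assumptions beyond the cluster counts quoted above):

    ```python
    # 8 SIMD clusters growing to 10 is 25% growth in ALUs/TUs,
    # not the 20% quoted earlier in the thread.
    old_clusters, new_clusters = 8, 10
    growth = new_clusters / old_clusters - 1
    print(f"{growth:.0%}")  # -> 25%
    ```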

    Jawed
     
  14. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
    It's been several weeks now since the HD 4000 series has been out; why is there no official "B3D Dave Baumann shrine"?
     
  15. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    The plans were cancelled after the Avivo-disaster :)
     
  16. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    That's a very reasonable explanation, but if it is true then I'd be very disappointed with ATI's engineering team. Due to graphics being embarrassingly parallel and scaling so easily across market segments, making your functional units as small as possible is the number one factor influencing the bottom line for ATI and NVidia.

    You can also look at RV670. That had a low size requirement, but why wasn't it faster? Using HD4xxx series technology they could have easily fit 480SPs/24TMUs in there along with the faster ROPs. However, the R6xx technology was not engineered very well.

    How did I neglect that? I very explicitly mentioned that ATI wanted to win marketshare back, which is why they priced it low, and of course NVidia had to follow suit. This is completely orthogonal to the omission of a high end chip. If ATI released a 350 mm2, 384-bit chip a few months later, it would only help more in suppressing NVidia's pricing because it would have clobbered the GTX 280.

    No, it was due to ATI's focus on small architectural unit size, not maximum chip size. That focus should have been there for every chip ever made by ATI/NVidia and probably was. They just did a much better job at engineering this time.

    Omission of the high end did nothing good for ATI. As I mentioned before, look at RV730. ATI focussed on bringing their performance parts out first, but did that hurt their mainstream parts? Heck no. RV730 blows away every other 128-bit part by a huge margin.
     
  17. RobertR1

    RobertR1 Pro
    Legend

    Joined:
    Nov 2, 2005
    Messages:
    5,852
    Likes Received:
    1,297
    Clearly late to the party, but well done Dave. Keep the trend going and, more importantly, see if you can get those pesky devs to optimize for ATI hardware also.

    It's one area I've always felt ATI has lacked in the past. They had great features and tech but failed to get the devs on board to properly implement the feature set.
     
  18. Wesker

    Regular

    Joined:
    May 3, 2008
    Messages:
    299
    Likes Received:
    186
    Location:
    Oxford, UK
    That just reminded me about ATI's tessellator. Weren't there a few games scheduled to make use of DX10/10.1 as well as the tessellator on ATI R6xx/R7xx cards?
     
  19. Cookie Monster

    Newcomer

    Joined:
    Sep 12, 2008
    Messages:
    167
    Likes Received:
    8
    Location:
    Down Under
    This is one of the key examples of ATi failing to really push a feature to the market. That just reminded me of this forgotten feature as well!

    Just compare this to PhysX post-Ageia acquisition. The once laughable, vaporware feature is now becoming a well-known proprietary standard for "physics" in the 3D computing world, set out by nVIDIA, and it has been getting a lot of momentum due to support shown by developers like THQ, EA, 2K Games etc. (albeit with the help of the rather impressive nVIDIA noise machine). It's also sparking a lot of heated discussions across tech forums. Now that is rather impressive.

    I don't even remember the last time AMD/ATi mentioned the tessellator.
     
  20. Spyhawk

    Newcomer

    Joined:
    Oct 31, 2007
    Messages:
    76
    Likes Received:
    1
    Nvidia has always had more mindshare/marketshare in the 3D world, even when ATI had the superior tech. So tell me, how is ATI to push its tech further if devs prefer to go NV's way due to their larger market?
     