Inquirer spreading R420 info

Discussion in 'Architecture and Products' started by 991060, Apr 16, 2004.

  1. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
ATI has yet to implement F-buffer support, and if they finally do, we don't currently know whether the 9700 will support it. I don't remember any changes to the FSAA algorithm.
     
  2. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    20,516
    Likes Received:
    24,424
The changes to 4x AA are minor speed tweaks. FWIR, it takes less of a hit on the R350 than it does on the R300.
     
  3. Tim Murray

    Tim Murray the Windom Earle of mobile SOCs
    Veteran

    Joined:
    May 25, 2003
    Messages:
    3,278
    Likes Received:
    66
    Location:
    Mountain View, CA
The F-buffer was introduced with the 9800.
     
  4. Socos

    Newcomer

    Joined:
    Feb 23, 2003
    Messages:
    48
    Likes Received:
    0
I guess it all depends on where you want to put your resources. Because I don't use Linux, I would be pissed if ATI spent all their time developing drivers for it and let the Windows drivers lag in features. I guess the bottom line is, if you use Linux (all three of you :roll: ) you should buy an Nvidia card and quit complaining about ATI's lack of support.

    If things change in the future and everyone starts using Linux I would bet ATI would increase their support.
     
  5. anaqer

    Veteran

    Joined:
    Jan 25, 2004
    Messages:
    1,287
    Likes Received:
    1
Way back, I believe Anand had some comparisons that showed how the R350 was more efficient clock-for-clock than the R300, especially with AA/AF enabled... others are saying most of Smoothvision 2.1 is actually done in the driver and that the same boost is also available on R300. The F-buffer is pretty much dead, though (at least in the consumer market; dunno about professional applications using it).
     
  6. Seiko

    Newcomer

    Joined:
    Feb 7, 2002
    Messages:
    163
    Likes Received:
    0
Exactly, and that's why as a consumer I felt let down by both IHVs. It may help in the long run, as they can obviously recoup R&D costs, and if I could trust them for a second I'd think all those earnings would get ploughed straight back into the next card. Unfortunately I suspect both will send shedloads of cash to the shareholders; great for them, but less so for the consumer. Combined with the risk of the next-gen parts showing little IQ improvement in areas such as FSAA and AF, that means I'm not going to be thrilled. I've already condemned NV40 in my mind's eye, and as such it won't be getting my custom. As for the R420, well, that remains to be seen.
It started to worry me when both IHVs apparently had talks and agreed to slow the refresh cycles. Hmmm, very convenient. ATI then shoved the 8500 into the bottom segment as opposed to a true DX9 card, and the R3x0 series has been tweaked over 18 months. Timelines like these are really going to hurt, IMHO, unless you can deliver a massive increase across all areas of the 3D card arena: IQ, features and speed. As we've seen, NV40 only delivers on two of the three, and I suspect ATI will only deliver on one of the three. It's then a pain to see just how quickly Nvidia can react to bring out a proper refresh part when the chips are down. Just look at the 5700, much better than the 5600. Is it a coincidence that Nvidia needed a competing midrange card? But hey, what happened to the idea of an 18-month cycle? If it suits them and the blasted shareholders, a new card comes screaming down the line :(
I know real-world economics can't/don't work like this, and I'm being extremely one-sided and unrealistic to expect a flashy new core every 6-12 months, but currently I do think the IHVs (ATI especially) are more concerned with their shareholders than with their consumers, and have decided for themselves the pace we all should have to move at.
     
  7. Doomtrooper

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,328
    Likes Received:
    0
    Location:
    Ontario, Canada
    There is so much wrong with that above post I won't even comment, man.
     
  8. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    But if it's not supported by ATI's drivers, how can anybody make use of it?
     
  9. Seiko

    Newcomer

    Joined:
    Feb 7, 2002
    Messages:
    163
    Likes Received:
    0
    lol, go on DT you know you want to. As I said mind, I'm only having a little rant so don't feel the need to blow a fuse.

    :)
     
  10. Doomtrooper

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,328
    Likes Received:
    0
    Location:
    Ontario, Canada
I really think people have no clue how technology works. The reason ATI and Nvidia are slowing down is the same reason Intel and AMD have slowed down: all these companies have reached technology barriers, and unless someone has some technology stolen from AREA 51, a natural slowdown is to be expected.
     
  11. jolle

    Newcomer

    Joined:
    Apr 18, 2004
    Messages:
    145
    Likes Received:
    0
I always thought the F-buffer was a way to run longer shaders...
which would only make it useful in OpenGL, I guess..
and was implemented on the 9800 series..

Always sounded more like a "professional 3D apps feature" to me..
and maybe to some extent a counter to the rather higher instruction limit
the FX had.. (PR reasons; you wouldn't need it in games, I assume.)
     
  12. Seiko

    Newcomer

    Joined:
    Feb 7, 2002
    Messages:
    163
    Likes Received:
    0
Doom, with all due respect, don't be a pompous twit! Do you think ATI pushed the barrier so much that they couldn't deliver a genuine DX9 low-end part, or that they couldn't have done more with the R360? Did they really need the secrets of Area 51 to help them?
The bottom line, as with any company, is profit and margins. Although memory may soon hold back the big two in the MHz race, both have plenty of room and a very healthy margin.
     
  13. Pete

    Pete Moderate Nuisance
    Moderator Legend

    Joined:
    Feb 7, 2002
    Messages:
    5,777
    Likes Received:
    1,814
    Seiko, what good would DX9 do you in a 9200-level card, besides looking good on paper?

    There's no need for name-calling. You're not respecting anyone by calling them a "pompous twit." And you're not respecting reality when someone can buy a "low-end" DX9 card in a $100 128MB 9600. I'm sorry, but if you're expecting decent speed in new games out of a $50 card, you're mistaken. Besides, anyone who can spend $50 on a video card can save up a few more months and pick up a $100 9600, or can cut out the middle-man and buy a used 4200 for less.
     
  14. FUDie

    Regular

    Joined:
    Sep 25, 2002
    Messages:
    581
    Likes Received:
    34
    No offense here but your viewpoint is extremely childish. Do cars get twice as fast every year? Of course not. Do people feel cheated because cars are not getting twice as fast every year? Of course not.
    What talks were these? You mean some rumored ones? Give me a break.
    Buy what card works for you. Does it matter if it's DX8 or DX9? Not really. If the card meets your needs, who cares?
Again, your view is very childish. First, the 5700 is the same as the 5600, just on a new process. Changing processes is relatively easy (i.e., it doesn't require 18 months or more). A complete design of a new chip takes far longer (that's the 18+ months people are referring to).
    Since you obviously have no clue about what it takes to design and build a chip, why don't you just assume that ATI and NVIDIA are doing the best they can?

    -FUDie
     
  15. bdmosky

    Newcomer

    Joined:
    Jul 31, 2002
    Messages:
    178
    Likes Received:
    48
This is not true. The 5700 is not the same as the 5600. The changes to its core are similar in nature to the changes made going from the 5800 to the 5900.
     
  16. FUDie

    Regular

    Joined:
    Sep 25, 2002
    Messages:
    581
    Likes Received:
    34
    If true, then it's likely that a lot of the same work was copied from the 5900 and that a whole new design wasn't created. Also, we know that a lot of work on the 5900 was done even before the 5800 was released since the 5900 came out so quickly after the 5800. Likely, the 5700 was nearly completed as well. Anyway, my point was that companies don't just "rush a whole new design in 6 months because it will help the shareholder" because it's physically impossible to do so.

    -FUDie
     
  17. hoom

    Veteran

    Joined:
    Sep 23, 2003
    Messages:
    3,264
    Likes Received:
    813
Well, I thought ATI & NV are accelerating, not slowing down.

I mean, look at the gap from the GF4 to the 9700 Pro to the 6800 Ultra (X800 XT if it is 600MHz).
That ain't a slowdown.
     
  18. UPO

    UPO
    Newcomer

    Joined:
    Dec 31, 2003
    Messages:
    58
    Likes Received:
    0
    Location:
    private apartment near ring0
Maybe ATI will expose the F-buffer to make the R350 and R360 PS 2.0b-compatible?
     
  19. Martin Eddy

    Regular

    Joined:
    Oct 5, 2003
    Messages:
    491
    Likes Received:
    4
    Location:
    Australia,Brisbane
I'm hoping that's what PS 2.0b is for, and that R420 is indeed PS 3.0.
     
  20. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
PS 2.0b exposes a maximum of 512 instructions (the maximum for PS 2.x), 32 registers, and unlimited texture instructions.

    Now, I could see the f-buffer allowing unlimited texture instructions and 512 instructions, but I really don't see it offering 32 registers.

    But more importantly, PS 2.0b does not support unlimited dependency on texture reads. It's still 4 levels of dependency. That is a red flag to me that this is still one pass only, not auto-multipass via the F-buffer.
     