AMD: "[Developers use PhysX only] because they’re paid to do it"

Discussion in 'Graphics and Semiconductor Industry' started by Richard, Mar 9, 2010.

  1. Npl

    Npl
    Veteran

    Joined:
    Dec 19, 2004
    Messages:
    1,905
    Likes Received:
    7
    Yeah, given that Nvidia spends additional money on a lot of software projects, has way better drivers at least when it comes to OpenGL and Linux, helps out more, and supposedly even "buys" developers to use their stuff, I wonder where ATI spends its income.
     
  2. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    In other words, AMD is pushing the envelope on nothing. You could've just said that instead of avoiding the question.

    The point is that geometry power can be used now. We don't need to wait for the future. What are the obstacles you see today for increasing geometric detail that lead you to believe Evergreen's geometry throughput is good enough?

    Ah, so not that easy after all then.

    The fact is that it's in DirectX now and available for use. AMD was praising the benefits of DX11 tessellation as well and pushed for its inclusion in games. Funny how when Nvidia does the same it's another evil plot on their part.
     
  3. Sontin

    Banned

    Joined:
    Dec 9, 2009
    Messages:
    399
    Likes Received:
    0
    The internet is such a funny place:

    http://forums.hexus.net/hexus-net/1...s-nvidia-neglecting-gamers-2.html#post1806514

    Ironic how the world has changed since last year.
     
  4. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    Tessellation levels are scalable, just like resolution and all the other settings that differentiate cards of different performance levels. I don't know why you're repeating AMD's hysteria. We want developers to increase geometric detail don't we?
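
    To put rough numbers on "scalable" - a hypothetical sketch, not any engine's actual API (the function names, the distance falloff and the per-card caps are all made up for illustration): a per-card cap on the tessellation factor acts just like a resolution setting, trading triangle count for performance on the same content.

```python
# Hypothetical sketch: distance-based tessellation factors, clamped to a
# per-quality-level cap, show how geometric detail can scale across cards.
def tess_factor(distance, max_factor):
    """Reduce the tessellation factor with distance, clamped between 1
    and the card's quality cap (all constants illustrative)."""
    factor = max_factor / max(distance / 10.0, 1.0)
    return max(1.0, min(factor, max_factor))

# A triangle tessellated at factor f yields roughly f^2 sub-triangles.
def triangle_count(base_tris, distance, max_factor):
    f = tess_factor(distance, max_factor)
    return int(base_tris * f * f)

# Same mesh, near the camera: a high-end cap of 16 vs a low-end cap of 4.
print(triangle_count(1000, 5.0, 16))  # high-end setting
print(triangle_count(1000, 5.0, 4))   # low-end setting
```

    The point of the sketch: both cards render the same asset; only the cap differs, exactly as with resolution or texture-quality sliders.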
     
  5. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83
    The old "embrace and extend" argument, used to justify splitting the market and controlling your own proprietary APIs.

    You don't think the products AMD produced nearly a year ago, at their price, performance, heat and energy usage, were pushing the envelope, but Nvidia's bastardised and crippled Fermi is great because it helps Nvidia push its own APIs and one-trick-pony advantages?

    I just have to point to the titles on the market. What makes you think that Fermi's heavy tessellation performance, which is only strong on their tiny niche top cards, is going to be of any use in the gaming market for the life of the product? Where are the products that need that performance? And what are you going to say if AMD's tessellation performance goes up with their new generation? Are you going to tell us it's acceptable for Nvidia to use vendor ID lockouts like with Batman's AA?

    I didn't say easy, I said "cheap" in terms of transistors and design. Whether going overkill on something because it's easy and you can't do the harder stuff (like making a cool, quiet card that doesn't need the power of the sun and isn't losing the manufacturer money) is viable depends on the architecture.

    It's only a plot when they're doing it to hurt the market in their own favour. How long before we see Nvidia write some standard tessellation code for a dev, then lock it out with vendor ID checks, then use lawyers to enforce their copyright on said standard code? Not long, I think. How is that good for PC gamers?
     
  6. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83
    So you're saying that Nvidia's high tessellation performance isn't really that important because lower performance will still give pretty good results?
     
  7. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    Correct.

    Where'd you get the idea it's only strong on top end stuff?

    http://www.hardware.fr/articles/801-5/dossier-sparkle-geforce-gts-450s.html

    If your best rebuttal is to come up with evil things Nvidia "might" do that have absolutely nothing to do with tessellation, then it's pretty clear you've realized that there's nothing to complain about.

    As linked above, their tessellation performance is still pretty good on lower-end cards. But your question is silly. That's like asking if Cypress's high performance isn't important because Redwood exists.
     
  8. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,423
    Likes Received:
    10,316
    Is it balanced? Perhaps not, similar to how ATI in the past misjudged how quickly the industry would adopt high shader throughput for games. Just like Nvidia back in those days, ATI's cards are perhaps more balanced with regard to current workloads. Just like ATI's cards back then, Nvidia's Fermi-based cards are a bit unbalanced, with an eye towards a future where tessellation loads are higher.

    One isn't necessarily better than the other. I don't mind either approach. Just as I liked the shader-heavy approach of ATI because I was looking ahead to what shader-heavy programs might provide (as we can see in current games), so too am I looking forward to a day when tessellation is a large and key factor in every game.

    Will it progress similarly to shaders, with each generation of games becoming more and more heavily weighted towards that performance? Hard to tell, but I can say for myself that I hope so.

    ATI used to get knocked for being too heavily shader-focused, unfairly, I might add. Nvidia getting knocked for too heavy a tessellation focus is also unfair. It's a feature I'm sure everyone would like to see more of.

    There are far better things to criticize each company for. I don't think enhancing forward-looking features is necessarily a bad thing.

    Rather, if a company is actively discouraging and hampering adoption of non-proprietary forward-looking tech (Nvidia with regard to DX10.1, for example), then feel free to lambast them. Or pushing a proprietary tech with no intention of opening it up to everyone (ATI perhaps with tessellation a long time ago, or Nvidia with PhysX), where it actively divides and makes PC gaming less attractive...

    But trying to look forward at where the industry is going? Or where you'd like the industry, as a whole, to go? No, I rather like that. So yeah, I liked the overabundance of (generally underutilized) shaders in the Radeon X1950 XTX, and I like the overabundance of tessellation power in the GTX 480. That doesn't mean there weren't drawbacks to each card. But I don't think beefing up forward-looking features is a waste.

    Regards,
    SB
     
  9. Babel-17

    Veteran

    Joined:
    Apr 24, 2002
    Messages:
    1,073
    Likes Received:
    307
    Well, isn't the thing with tessellation, if it's done properly, that it can offer something for everybody?

    For example, rounding off the barrels of guns. Near perfectly round would be great and something I would like.

    But even some rounding is a good thing.

    I don't understand the technology, but I'll go out on a limb and say that if it can't be dialed in, then maybe the game's developers don't have a full handle on it.
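
    To put a number on "even some rounding is a good thing" - a quick sketch, with all figures purely illustrative: the maximum gap between a true circle of radius r and an inscribed regular n-gon is r * (1 - cos(pi / n)), so even modest edge counts shrink the visible error fast.

```python
import math

# Sketch: how closely a regular n-gon approximates a circle of radius r.
# Maximum deviation (sagitta at an edge midpoint) is r * (1 - cos(pi / n)).
def max_deviation(r, n):
    return r * (1.0 - math.cos(math.pi / n))

# A 20 mm gun-barrel radius: deviation in mm for a few edge counts.
for n in (6, 12, 24, 48):
    print(n, round(max_deviation(20.0, n), 3))
```

    Going from a hexagonal barrel to 24 sides already drops the error well under a pixel at typical viewing distances, which is why partial rounding is still a clear win.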
     
  10. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,451
    Likes Received:
    471
    Fermi will be sufficient for future tessellation workloads, but it will be short on arithmetic, texturing, bandwidth, fillrate... Who will care about tessellation if the highest playable resolution is (let's say) 1024×768, where the triangles are small enough even without tessellation and its impact on quality will be negligible?
     
  11. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    You know, AMD is not known to go the cheap and easy route.
    And who knows, maybe it wasn't that cheap or that easy either?

    AFAIR, both R200 and NV20 had HOS support, something in the line of RT- and n-Patches respectively.

    WRT tessellation: it's useful not to be bottlenecked by it, and it's doubly useful if it serves your professional line of cards equally well. Of course, games are not all about tessellation; heck, they're not even all about graphics. But with consoles as the primary development platforms for the majority of games, it becomes hard to convince developers to integrate something which really improves PC gaming - because it costs them money and they see only very limited benefit in return.

    Using more geometry instead of texture tricks is one way to get this done, and since the bus systems (apart from the CPUs) in current PCs are not able to handle hundreds of millions of polygons, tessellation, for example, makes sense. I want a brick wall to look like a brick wall - not only when I'm viewing it from just about the right angle.
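
    A rough back-of-envelope on why pushing that much geometry over the bus doesn't work (all numbers illustrative - real meshes share vertices and use index buffers, so treat this as an upper bound): even generous bus bandwidth is orders of magnitude short.

```python
# Back-of-envelope sketch (illustrative numbers): streaming full-detail
# geometry over the bus every frame vs. expanding a coarse mesh on the GPU.
triangles     = 200e6   # "hundreds of millions" of polygons per frame
bytes_per_tri = 3 * 32  # 3 vertices, ~32 bytes each (position/normal/uv)
fps           = 60

needed_gb_s = triangles * bytes_per_tri * fps / 1e9
pcie2_x16_gb_s = 8.0    # PCIe 2.0 x16 peak, roughly

print(f"geometry stream: {needed_gb_s:.0f} GB/s vs bus peak ~{pcie2_x16_gb_s} GB/s")
```

    Over a terabyte per second against a single-digit-GB/s bus - hence generating the detail on the GPU from a coarse control mesh rather than shipping it across every frame.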

    And it's not like AMD wasn't promoting the benefits of tessellation before, is it? And after all, their cards also have powerful tessellators which serve to improve the rendered images.

    But you're right - it's not only about tessellation.

    As I said above: engine (and game) development is focused mostly on consoles. That's one large factor, the other being that what feels like 90% of the (installed) market isn't even DX11-ready and so would need to push that geometry through the CPU and bus systems of your average PC.

    It almost sounds as if the bolded part is trying to say that the same geometric detail looks better at lower screen resolutions just because the triangles have fewer pixels.
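
    For what it's worth, that claim is easy to put numbers on - a toy sketch (the visible triangle count is purely illustrative) of average pixels per triangle at two resolutions:

```python
# Sketch: average pixels per triangle for a fixed on-screen triangle count
# at two resolutions (triangle count purely illustrative).
def pixels_per_triangle(width, height, visible_tris):
    return width * height / visible_tris

for w, h in ((1920, 1200), (1024, 768)):
    print(w, h, round(pixels_per_triangle(w, h, 500_000), 1))
```

    The same mesh that averages a few pixels per triangle at 1920×1200 is already near one pixel per triangle at 1024×768, so extra tessellation buys little visible quality there.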
     
    #271 CarstenS, Oct 17, 2010
    Last edited by a moderator: Oct 17, 2010
  12. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83
    So the fact that they've already done it for AA doesn't count? They also lock out PhysX when their own customers have the temerity to use a competitor's hardware, and they lock out 120 Hz monitors and motherboard SLI with hardware IDs unless you've paid them. It shows their modus operandi, and where they are happy to draw the line on acceptable behaviour.

    If they've got devs out writing tessellation code, I'd not be surprised if they are also locking it out with vendor ID, because that's what Nvidia as an organisation thinks is reasonable behaviour. Nvidia don't care if it hurts the gaming market or the customer, and you'd have to be pretty naive to believe they won't keep on doing the same thing in the future when there's no reason to think they've changed how they operate.
     
  13. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Did Nvidia actually lock out something on competitors' hardware which is part of an open spec? Like tessellation in a DX11 title?

    Antialiasing in Batman: AA was/is a hack not possible within the standards of DX9, AFAICT. So while it's not "nice" to not open it up, it's at least understandable - just imagine there'd be some kind of error running that code on ATI cards: "Nvidia injected viral code in order to cripple competitors' image quality/game performance ZOMG!"

    PhysX, OTOH, is their own technology, and every company I know of tries to protect its investments. But I hear Bullet and Havok are two very capable solutions, one of them even free and nearing its (GPU-accelerated) premiere in an application.
     
  14. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    I'd be surprised to see it actually happen given the present competitive scenario. Few devs will be happy to see their DX11 games get screwed on the majority of the install base of DX11 machines.
     
  15. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    I thought you could do MSAA on DX9 systems. MS's DX SDK even has a demo (with sample code and everything) on AA. AFAICS, it's been there for years now.
     
  16. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Right, but AFAIK something in UE3 prevents DX9 from generally using that. Or is that only a limitation of DX9 hardware?
     
  17. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    Loads of games have used UE3, and I am pretty sure the rest use MSAA on DX9 hardware.
    If that were true, then no DX9 game would have MSAA.
     
    #277 rpg.314, Oct 17, 2010
    Last edited by a moderator: Oct 17, 2010
  18. Sontin

    Banned

    Joined:
    Dec 9, 2009
    Messages:
    399
    Likes Received:
    0
    You can use AA under the DX9 API, but UE3 is not capable of it in combination with DX9. That's a specific problem of the engine. Most games using UE3 do not support AA out of the box.
     
  19. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    That is ODD if true.
     
  20. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    That's what I seemed to remember: no MRT+MSAA in DX9 - not just UE3, but an API limitation. I just looked it up again on Google.

    http://www.gamedev.net/community/forums/topic.asp?topic_id=485166
    "In the D3D9 help, in the index, look for "Multiple Render Targets". It says "No antialiasing is supported"."

    http://http.developer.nvidia.com/GPUGems3/gpugems3_ch23.html
    (yes, Evilvidia...)
    "Acquiring Depth in DirectX 9 […] More seriously, MRTs are not compatible with multisample antialiasing (MSAA) in DirectX 9."
     


  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.