bit-tech Richard Huddy Interview - Good Read

Discussion in 'Graphics and Semiconductor Industry' started by neliz, Jan 6, 2010.

  1. Lonbjerg

    Newcomer

    Joined:
    Jan 8, 2010
    Messages:
    197
    Likes Received:
    0
    Really?
    I got a PPU in 2006, and in most games/demos where I compared CPU vs. PPU physics, the CPU physics ran on a single core.

    I have never seen any NovodeX/AGEIA demo or game with full multicore utilization!
     
  2. Rys

    Rys PowerVR
    Moderator Veteran Alpha

    Joined:
    Oct 9, 2003
    Messages:
    4,156
    Likes Received:
    1,433
    Location:
    Beyond3D HQ
    Just an observation, Sontin, but it's really very disrespectful to call Richard "Fuddy". He's put in front of press to evangelise for ATI, sure, but he's a deeply smart guy who's forgotten more about graphics than anyone calling him Fuddy would ever hope to learn. So when he talks about it and the industry, one should listen and pay attention, rather than instantly think it's self-serving for him and ATI. Given he used to work for NV, his perspective is somewhat unique and really quite valuable.

    So maybe try reading the interview again without the inbuilt misconception he's acting like a PR person, since you might get something useful from it and contribute something useful in this thread going forward.
     
  3. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    433
    Location:
    New York
    If you spent less money each time, yeah, but technically that doesn't count as an "upgrade" :)
     
  4. gamervivek

    Regular Newcomer

    Joined:
    Sep 13, 2008
    Messages:
    715
    Likes Received:
    220
    Location:
    india
  5. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    What do you mean by this? The PhysX SDK for the PC supports multithreading and core scaling (despite what some people would like you to believe). Who actively removes what? There are "tons" of CPU PhysX titles out there. It just seems the GPU PhysX titles get the most scrutiny, because they don't accelerate CPU performance to some people's liking. Yet for every GPU PhysX title out there, there are about 12-13 CPU PhysX titles.
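    For reference, a minimal sketch of how a title opts in to SDK-managed worker threads when creating a scene. This is recalled from memory of the 2.x-era SDK, so treat the exact field names as approximate:

    Code:
        // PhysX 2.x-era names, recalled from memory; check your SDK headers.
        NxPhysicsSDK* gPhysicsSDK = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

        NxSceneDesc sceneDesc;
        sceneDesc.gravity             = NxVec3(0.0f, -9.81f, 0.0f);
        sceneDesc.simType             = NX_SIMULATION_SW;  // software (CPU) simulation
        sceneDesc.internalThreadCount = 3;                 // SDK worker threads beyond the calling thread
        NxScene* scene = gPhysicsSDK->createScene(sceneDesc);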

    Another interesting perspective: there are a ton of CPU PhysX/Havok implementations out there, but do we give a lot of grief to the developers who only want to implement simple physics effects in their titles and don't massively thread their CPU environments or code for the GPU? The devs are ultimately responsible for how our games get coded and how the performance/features look from there. Some devs will opt for CPU multithreaded physics. Some will opt for GPU physics. Some will opt for simple single-threaded physics. They don't necessarily have to be inclusive of each other. It'll be very rare for a dev to invest heavily in a second implementation when they already have one that performs to their liking.
     
    #45 ChrisRay, Jan 15, 2010
    Last edited by a moderator: Jan 15, 2010
  6. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    He means that, in an actively promoted GPU PhysX title, none of the scaling you mention can be witnessed. The point being that your "tons" of CPU titles simply don't exhibit this behaviour: they do not accelerate PhysX at all, as can be witnessed by what almost seems to be a frame cap in benchmarks (21fps on ATI hardware, or on NV hardware with an edited hardware ID).

    Even the CPU loads witnessed in these situations (14% on one core out of 8) simply don't suggest that anything is actually going on, PhysX-wise or gaming-wise. PhysX is ultimately owned by Nvidia, and there's simply no evidence that they have so far put any effort (themselves, or by helping developers) into getting CPU PhysX up to speed in games that also support GPU PhysX.

    In your last sentence you seem to indicate developers only develop for one target (i.e. GPU PhysX). Isn't it Nvidia's duty to make sure that the GPU workload is properly processed on the CPU when a GPU is not present? Or do developer relations only go so far that they won't help developers get a game running properly unless a number of variables are to NV's liking? (Hint: Intel's recent compiler snafu.)
    Since PhysX is proprietary and owned by Nvidia, you can't expect others to perform code optimizations on it.
     
  7. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    And? Like I said above, if a developer chooses to code for one or the other and optimally tune for it, you're not likely to see them spending a lot of time with the other. If a game is coded for the GPU it doesn't even necessitate a CPU fallback; some PhysX titles don't even have one. It's obvious the CPU PhysX environments in GPU PhysX titles are included as an afterthought, not something that was optimally coded for. I personally would rather they not even be there.

    This is something I talked to Tony Tamasi about in Vegas, and the answer is no, unless the developer requests that kind of assistance from Nvidia's PhysX devrel. Nvidia will in fact help devs code a threaded CPU implementation if they request it. Nvidia's PhysX devrel provides assistance for both GPU and CPU PhysX on the PC if the dev has purchased that kind of support; obviously those using the free SDK build will probably not get the same level of support. However, most devs will not bother coding for the CPU if they have already coded PhysX for the GPU, or vice versa. If it were as simple as "switching back and forth" using a compiler, then all games would have the option for CPU or GPU PhysX.

    They don't, and the reason why should be obvious: it's not that easy. Take Batman: Arkham Asylum, for example. Nvidia said they invested about 4 man-months into getting the title ready and optimal for GPU PhysX, which they agree is too long. There are obviously improvements that need to be made on the devrel side. But it's still not a simple on/off switch, especially after you have thoroughly coded for one or the other.
     
    #47 ChrisRay, Jan 15, 2010
    Last edited by a moderator: Jan 15, 2010
  8. Psycho

    Regular

    Joined:
    Jun 7, 2008
    Messages:
    745
    Likes Received:
    39
    Location:
    Copenhagen
    So you're saying there is no CPU fallback implementation of the GPU PhysX API (an implementation that should be trivially multithreaded), so if developers of GPU PhysX API titles want CPU support at all, they'll have to code it up against yet another API?
     
    #48 Psycho, Jan 15, 2010
    Last edited by a moderator: Jan 15, 2010
  9. Colourless

    Colourless Monochrome wench
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,274
    Likes Received:
    30
    Location:
    Somewhere in outback South Australia
    I'm guessing that the CPU fallback for GPU PhysX is single-threaded, and if developers want it multithreaded they'd have to do it manually themselves. If that is the case, though, it'd be really annoying.
     
    #49 Colourless, Jan 15, 2010
    Last edited by a moderator: Jan 15, 2010
  10. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,810
    Likes Received:
    476
    Is the number of live-in developers they get purely a question of the level of support purchased, or does NVIDIA vary the level of support based on the importance of the title? (The importance to NVIDIA, that is.)
     
  11. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    433
    Location:
    New York
    They're probably doing what any sane company does, i.e. allocating resources to get the best return.
     
  12. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,810
    Likes Received:
    476
    So if the amount of resources (free developers) is not purely a contractual issue, we can then conclude it would have cost Rocksteady money to optimize for multithreading (because that would have reduced NVIDIA's return).
     
  13. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    There is no excuse for underutilized cores in a physics API meant to run on a stream processor. Whatever parallel loads you feed to CUDA can be fed to a very simple job queue running code from a CUDA-to-CPU compiler, and the latter probably already exists.
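    To make that concrete, the job-queue half of the argument is a few dozen lines of ordinary C++. This is purely illustrative (the class and names below are mine, not from any SDK): independent work items of the sort a GPU would consume, drained by as many worker threads as there are cores.

    Code:
        // Minimal worker-pool job queue: Push() work items from the physics
        // step, and N threads drain them in parallel.
        #include <condition_variable>
        #include <functional>
        #include <mutex>
        #include <queue>
        #include <thread>
        #include <vector>

        class JobQueue {
        public:
            explicit JobQueue(unsigned n = std::thread::hardware_concurrency()) {
                for (unsigned i = 0; i < n; ++i)
                    workers.emplace_back([this] { Drain(); });
            }
            ~JobQueue() {
                { std::lock_guard<std::mutex> lock(m); done = true; }
                cv.notify_all();
                for (auto& w : workers) w.join();
            }
            void Push(std::function<void()> job) {
                { std::lock_guard<std::mutex> lock(m); jobs.push(std::move(job)); }
                cv.notify_one();
            }
        private:
            void Drain() {
                for (;;) {
                    std::function<void()> job;
                    {
                        std::unique_lock<std::mutex> lock(m);
                        cv.wait(lock, [this] { return done || !jobs.empty(); });
                        if (done && jobs.empty()) return;  // fully drained, shut down
                        job = std::move(jobs.front());
                        jobs.pop();
                    }
                    job();  // e.g. solve one island of rigid bodies
                }
            }
            std::vector<std::thread> workers;
            std::queue<std::function<void()>> jobs;
            std::mutex m;
            std::condition_variable cv;
            bool done = false;
        };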

    Nobody is saying NVidia is breaking the law by doing this. They're just saying NVidia is being a corporate douche for intentionally crippling CPU performance.

    I would love to see some Havok vs. PhysX CPU benchmarks for similar physics simulations. It wouldn't surprise me if there was an order of magnitude difference.
     
  14. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    It's more a question of resources than anything else. Nvidia tries to support all developers if they can, but they don't have unlimited manpower. It's not uncommon for an Nvidia devrel guy to go down to the offices of a company they are helping and basically live there till the work they are doing is done. In the case of Rocksteady, they actually wanted to add more GPU PhysX than what exists in the current game, but they didn't have the time/manpower to get it implemented.
     
  15. Squilliam

    Squilliam Beyond3d isn't defined yet
    Veteran

    Joined:
    Jan 11, 2008
    Messages:
    3,495
    Likes Received:
    113
    Location:
    New Zealand
    I would like to know if there's an example of a released game, or one coming soon, which implements both GPU PhysX and multithreaded CPU PhysX at the same time. If you have an example handy, it would go a long way towards putting this issue to rest.
     
  16. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    None that I'm aware of. Like I said, if a dev optimizes for one, it's unlikely they'll spend the resources optimizing for the other. It could happen, but it has not happened yet. It all comes down to how satisfied devs are with the implementation of PhysX they have used.
     
  17. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,810
    Likes Received:
    476
    It comes down to a business decision ... a cheap license with free developers (if your game is big enough) or an expensive license and basic support.

    Without proof I'm hesitant to just take your assurances that a developer could so easily multithread the code ... as we saw with the MSAA stuff, source code licenses are not always that flexible to begin with.
     
  18. Demirug

    Veteran

    Joined:
    Dec 8, 2002
    Messages:
    1,326
    Likes Received:
    69
    Since we evaluated PhysX for BattleForge, I have some experience with the SDK. From the SDK point of view, using another CPU core for the physics simulation is pretty easy, but it only works as it should if you can update and start the simulation some time before you need the result; if you need the result immediately, the fetch will block. Unfortunately there is still much multicore-unfriendly engine code out there that cannot handle this latency. If you want to run on the GPU you have to face these latency problems too, which can make it complicated to integrate.
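    Roughly, the pattern looks like this (2.x-era names recalled from memory; UpdateGameLogic and RenderFrame are placeholders for whatever engine work does not touch physics state):

    Code:
        void StepFrame(NxScene* scene, float dt)
        {
            scene->simulate(dt);    // kick the step off early; returns immediately
            scene->flushStream();

            UpdateGameLogic();      // placeholder: work that doesn't read physics results
            RenderFrame();          // placeholder

            // Collect results as late as possible. Calling this straight after
            // simulate() just blocks, which is the latency problem above.
            scene->fetchResults(NX_RIGID_BODY_FINISHED, true);
        }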

    On the other hand, locking PhysX to the GPU is quite simple. There are some functions that work only if you use a GPU context; they are simply not implemented in the CPU version. If someone makes use of these functions, the simulation will not run on the CPU until you remove them.
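    For illustration (again 2.x-era names from memory, so treat them as approximate): the hardware/software split is made at scene creation, and any hardware-only call has to be guarded the same way or the CPU path simply won't run.

    Code:
        // gPhysicsSDK created earlier via NxCreatePhysicsSDK(...).
        NxSceneDesc sceneDesc;
        if (gPhysicsSDK->getHWVersion() != NX_HW_VERSION_NONE)
            sceneDesc.simType = NX_SIMULATION_HW;  // PPU/GPU context available
        else
            sceneDesc.simType = NX_SIMULATION_SW;  // CPU-only fallback
        NxScene* scene = gPhysicsSDK->createScene(sceneDesc);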
     
  19. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    I thought I was pretty clear it wasn't "easy".
     
  20. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,810
    Likes Received:
    476
    I meant easy as in getting an affordable source code license out of NVIDIA, not development effort.
     