Impact of nVidia Turing RayTracing enhanced GPUs on next-gen consoles *spawn

Discussion in 'Console Technology' started by vipa899, Aug 18, 2018.

Thread Status:
Not open for further replies.
  1. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,156
    Likes Received:
    5,090
    Mostly because it's free for developers to use and it's "good enough". If the developer can afford it they'll generally switch to Havok.

    Regards,
    SB
     
  2. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha Subscriber

    Joined:
    May 14, 2005
    Messages:
    1,373
    Likes Received:
    242
    Location:
    NY
    I highly doubt Nvidia spent 150 million dollars on PhysX just to artificially inflate a Futuremark score. That seems like an obviously poor business decision. ;)

    PhysX also supported multithreading and SIMD on consoles (360/PS3) way before Tegra/PC iirc. I don't understand where your conspiracy theories are coming from. Could Nvidia have put more resources into PhysX and perhaps sped up development time? Of course they could have! But I could easily see the business case not being there. Remember, Ageia wrote the single-threaded x87 code, not Nvidia. Nvidia's only "crime" is not updating PhysX at a rate you arbitrarily deem acceptable.
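    For anyone wondering what the x87-vs-SSE complaint actually means in code, here is a toy sketch (not PhysX source, just an illustration; function names are made up): the same position update written as plain scalar C++, which is roughly what an x87/scalar build boils down to, and with SSE intrinsics that process four floats per instruction.

    #include <xmmintrin.h>

    // scalar path: one float at a time (what the old x87 builds amount to)
    void integrate_scalar(float* pos, const float* vel, float dt, int n) {
        for (int i = 0; i < n; ++i)
            pos[i] += vel[i] * dt;
    }

    // SSE path: four floats per iteration using 128-bit registers
    void integrate_sse(float* pos, const float* vel, float dt, int n) {
        __m128 vdt = _mm_set1_ps(dt);                     // broadcast dt into all four lanes
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 p = _mm_loadu_ps(pos + i);
            __m128 v = _mm_loadu_ps(vel + i);
            _mm_storeu_ps(pos + i, _mm_add_ps(p, _mm_mul_ps(v, vdt)));
        }
        for (; i < n; ++i)                                // scalar tail for leftover elements
            pos[i] += vel[i] * dt;
    }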

    Finally your connection back to DXR doesn't make any sense. DXR is a standard defined by Microsoft. How can Nvidia prevent raytracing from becoming widespread by adopting an industry standard api?!? How is DXR any different than "vanilla DX" in this regard?
     
    DavidGraham and pharma like this.
  3. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,904
    Likes Received:
    6,187
    I don't have the game so I can't comment. DXR has a baseline that can fall back to compute if there is no dedicated hardware. In another thread, there's an Xbox One X using DXR to raytrace, and it has no RT hardware.
    I don't know if nvidia would 'force' a user to own a 20xx-series GPU to enable RT.
    I'm willing to think that's not the case: since nvidia partnered with MS to route RTX commands through DXR, nvidia will need to release drivers that fall back to compute, or they are clearly not following protocol. And that would be, imo, a mistake. You want people to enable the RT version to see what it looks like, and to associate that image with higher FPS/resolution on better, newer hardware.
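    For reference, this is roughly the check a title can make to decide between the hardware RT path and a compute/screen-space fallback. It's a sketch using the standard D3D12 feature query (assuming release, non-experimental headers); the function name and the way the fallback is chosen are just illustrative.

    #include <d3d12.h>

    // returns true when the driver exposes hardware-backed DXR (tier 1.0 or higher)
    bool SupportsHardwareDXR(ID3D12Device* device) {
        D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                               &opts5, sizeof(opts5))))
            return false;
        return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
    }

    // usage in a renderer (illustrative):
    //   bool useRT = SupportsHardwareDXR(device);
    //   if (!useRT) { /* compute or screen-space fallback path */ }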
     
  4. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,734
    Likes Received:
    11,208
    Location:
    Under my bridge
    They bought PhysX to sell PhysX-enabled hardware, no? The expectation being that people would spend more for physics-acceleration and nVidia would make plenty more than $150 million. What's the business decision in a GPU manufacturer buying and maintaining a physics engine if that's not the case?

    Again, to sell hardware. I guess the idea was that games would use PhysX accelerated on their hardware, and fall back to a CPU solution. The crap CPU implementation may have just been lack of investment. Supporting PhysX on consoles means game devs would use it. Providing suboptimal solutions on PC would encourage PC gamers to buy PhysX hardware for the PC versions of the same games. As the consoles did not impact the PC market, there was no need to hobble PhysX; indeed fast console performance would be needed to encourage devs to use PhysX and generate a need/desire for PhysX hardware on PC.

    They wanted physics to be astronomically faster on their hardware to help shift hardware. They had no incentive to improve non-proprietary-solution performance, so they didn't do much about accelerating PhysX until it became apparent people wouldn't spend money on physics hardware.
     
  5. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha Subscriber

    Joined:
    May 14, 2005
    Messages:
    1,373
    Likes Received:
    242
    Location:
    NY
    Correct, but that's not the conspiracy theory that was put forward.

    In general I think you are fundamentally misunderstanding the gpu acceleration portion of PhysX. Only a small portion of PhysX could actually be accelerated by the gpu (can't remember if that applied to the ppu too). Most of the "unoptimized code" for PhysX still "existed" even when paired with a Nvidia gpu. There's no "fallback solution". So I know it's easy to say "well Nvidia simply was trying to stack the deck in their favor" (and I agree it's clear that Nvidia was in no rush to add large changes to PhysX), but perhaps there's a little bit more to this story. :smile: Writing highly optimized physics middleware that's supported on many platforms is not some trivial endeavor. :wink:
     
    #325 willardjuice, Sep 12, 2018
    Last edited: Sep 12, 2018
    DavidGraham, Shifty Geezer and pharma like this.
  6. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    433
    Location:
    New York
    Source? DICE said they've optimized for Nvidia hardware since it's the only thing available, but what exactly is a proprietary DXR path? Once AMD has a DXR driver, are you suggesting developers will ignore it and explicitly look for Nvidia hardware? What evidence do you have to support that theory?

    There's no parallel with PhysX here because obviously there's no equivalent Microsoft api.
     
  7. OCASM

    Regular Newcomer

    Joined:
    Nov 12, 2016
    Messages:
    922
    Likes Received:
    881
    Here's one of the PhysX coders giving the mundane, non-conspiratorial version of the story:

    http://www.codercorner.com/blog/?p=1129

    Also his thoughts on how PhysX is different from RTX:

    http://www.codercorner.com/blog/?p=2013
     
    Shifty Geezer and iroboto like this.
  8. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,723
    Likes Received:
    193
    Location:
    Stateless
    I do believe that MSFT backing that breakthrough with an API is what will actually impact the next generation of consoles; AMD will have to catch up somewhat.

    The question is more about the extent to which they can catch up. Their situation is pretty akin to their situation in the CPU realm against Intel.
    AMD has not caught up with Intel; they now offer legit CPU options, though they do so with significantly lower margins.
    They still can't compete on volume, and as Jen-Hsun once stated, Intel is not threatened in workstations or servers. As for their share of the addressable personal PC/Windows market, they have a lead in power consumption in the era of the laptop. I believe Intel is over-reacting by raising a lot of its low-end offering to 4 cores. Imho they have Core i3 and i5 and should match the number of cores to the i-number. They have to keep their margins high; factories are expensive.
    Back to Nvidia: they have a real business plan; they do not subsidize or work just for the fame they already have, etc. It would take a lot of money to get them to do something significant, maybe even more so than Intel, which may have discrete GPUs to promote soon.
    I wonder if we will see Nvidia back in the console realm anytime soon. I'm not sure they even have something that could match Nintendo's needs, as they move to higher and higher power consumption for their SoCs, but also toward usage that is more and more about computation and not graphics (more like a mini supercomputer than a mini gaming PC).

    I believe Sony will stick to AMD; MSFT should make a deal with Intel, or at least try. As for Nvidia, its new GPUs could hurt AMD if they offer a significant jump in mining prowess. Nvidia is extending its technological lead on them, and it is in part hidden by the current mining frenzy.
    It seems that ray tracing some parts of the pipeline is becoming more efficient than cheating your way through them; it may prove so for every GPU manufacturer, and how the different manufacturers handle this transition remains to be seen. It goes much further than the console business. Nvidia is taking the first step; will PowerVR leverage their prior efforts? What does AMD have in stock? etc.
     
    #328 liolio, Sep 13, 2018
    Last edited: Sep 13, 2018
  9. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,788
    Likes Received:
    2,592
    According to one of the founders of PhysX himself, it's just a case of inexperience at the beginning.

    Once PhysX got famous and widely adopted, nobody asked them for an SSE version.
    They even experimented with SSE code, and found the performance uplift not worth the cost.
    After a complete rewrite in PhysX 3, SIMD was implemented.
    In the end he vehemently dismisses all the crap about crippling PhysX.
     
  10. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,734
    Likes Received:
    11,208
    Location:
    Under my bridge
    Already referenced by OCASM. ;)
     
  11. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,996
    Likes Received:
    4,570
    It would be an obviously poor business decision and as far as I can see you're the only one in this thread who suggested nvidia bought AGEIA just to inflate 3dmark scores.
    I don't think nVidia bought AGEIA for that reason alone. I can't see any post of mine that suggests something like that.

    I laid out the timings for the acquisition -> 3dmark Vantage release -> first PhysX driver release -> G92b release -> futuremark invalidating GPU PhysX -> David Kanter's study -> PhysX 3 release with SSE+Multithreading
    The dates of the events are accurate and you can check them yourself. Then you can draw your own conclusions.

    You're free to believe everything was just a happy coincidence for nvidia. You're free to believe nvidia purchased AGEIA in February but wasn't aware that PhysX would be used in Vantage ~2 months later, and to call anyone who thinks otherwise a conspiracy theorist.
    ¯\_(ツ)_/¯



    From all the content we've seen so far, I'm honestly convinced that Shadow of the Tomb Raider and Battlefield V will have the "nVidia RTX" toggle in the settings, which you can only activate if you have a Geforce RTX card.
    If instead it has a "DXR" toggle, then I'm wrong.

    I hope I'm wrong because I'd rather see raytracing flourish for the masses than for these games to have raytracing working only with nvidia hardware.

    Shadow of the Tomb Raider releases tomorrow, so perhaps we'll see how the raytracing toggle works very soon.


    None of what is in my post, or in David Kanter's article, is invalidated by those statements.

    The statement was never that nVidia went out of their way to sabotage the x86 performance of PhysX 2.x.
    It's that nVidia focused solely on using PhysX as a unique selling point for new cards, even to the detriment of their older cards. They didn't buy AGEIA so that all games could have cool real-time physics effects.
    Proof of that is that they didn't lift a finger to make the software path usable until David Kanter called them out on the use of x87 instructions.

    Now, my impression is that nvidia is doing the same for raytracing. They didn't team up with the dev studios to implement open DXR solutions; they're implementing RTX toggles that will only work with their latest RTX cards and nothing else.
    One could argue that nowadays it's not even worth the hassle to implement a DXR GPGPU fallback for performance reasons, but 5 years from now said RTX toggles won't be usable on Intel and AMD GPUs.

    Again, I could be wrong but it's the impression I got from the presentations so far.

    It's not something nVidia wouldn't do, and it probably wouldn't even make the top 3 of their abuses of dominant position with devs (e.g. unnecessary levels of tessellation and geometry that brought no visual upgrade and killed performance on their own older cards).
    A very similar thing they did in the past was Comanche 4, which wouldn't let DX8 pixel shaders or MSAA work on anything but nvidia cards.
     
    Lalaland likes this.
  12. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,904
    Likes Received:
    6,187
    Yeah, I'm going to be keeping an eye on this as well. It will be quite interesting to see, when the game releases, whether we can turn on these features on a 1070, for instance.
     
    Lalaland likes this.
  13. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    433
    Location:
    New York
    Nothing released so far leads me to believe that developers are coding to some proprietary API. Of course they will optimize for nvidia but ultimately the engine should be calling the DX12 DXR hooks that would be supported by any DXR implementation.

    There are examples of nvidia sponsored and marketed technologies e.g. HBAO+ that work just fine on competitor hardware because they use industry standard APIs. Now whether those implementations are optimized for competing architectures is a different story.
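    Concretely, the vendor-neutral "hooks" in question look something like this once the acceleration structures and RT pipeline state object are built: the per-frame dispatch is plain D3D12 whatever GPU sits underneath. (A sketch only; all the setup of shader tables and BVHs is omitted, and the function/parameter names are just illustrative.)

    #include <d3d12.h>

    // issue one raytracing dispatch through the standard DXR entry points
    void TraceFrame(ID3D12GraphicsCommandList4* cmdList,
                    ID3D12StateObject* rtPipeline,                 // RT pipeline state object
                    const D3D12_DISPATCH_RAYS_DESC& rays)          // shader tables + width/height/depth
    {
        cmdList->SetPipelineState1(rtPipeline);
        cmdList->DispatchRays(&rays);
    }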
     
  14. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,184
    Likes Received:
    1,841
    Location:
    Finland
    That's not always the case, though; NVIDIA has plenty of vendor checks in their history (not that I think the current RT stuff is tied to just their hardware, I think it's DXR-compatible all the way).
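    For context, the difference between a vendor check and a capability check is literally this (illustrative only; the PCI vendor IDs are public, the function name is made up, and the capability query is the one sketched earlier in the thread):

    #include <dxgi.h>

    // gating a feature on who made the GPU...
    bool IsNvidiaAdapter(IDXGIAdapter1* adapter) {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        return desc.VendorId == 0x10DE;   // 0x1002 = AMD, 0x8086 = Intel
    }
    // ...versus gating it on what the device reports it can actually do
    // (CheckFeatureSupport / RaytracingTier, as above).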
     
  15. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    10,978
    Likes Received:
    5,799
    Location:
    London, UK
    Microsoft's SDK is "experimental" and the API isn't locked down, so it may (and in all likelihood will) change. I remember Nvidia getting aggressive when designing the GeForce 5 series and focussing on great 16-bit shader performance. Full-precision (24/32-bit) performance? Not so much. Guess what devs really wanted? Full-precision shader performance, and that put Nvidia behind ATI for that graphics generation.
     
  16. vipa899

    Regular Newcomer

    Joined:
    Mar 31, 2017
    Messages:
    922
    Likes Received:
    354
    Location:
    Sweden
    Let's hope the PS5 will contain RT hardware like the RTX series, so we can see what AAA devs can do with it.
     
    OCASM and Heinrich4 like this.
  17. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,904
    Likes Received:
    6,187
    It's hard to believe they wouldn't have such a variant to contrast against a non-RT variant.

    They must have several console platforms that can launch at different times/price points, all of them competing to be the one that actually goes on sale. I would have a hard time believing that Sony would not look into the possibility of an RT console that could launch in 2 years, especially with all this noise and excitement over the last 1.5 months. They could easily be holding an RT console variant to see if it's ready to launch by 2021 and, if not, go with their backup plan, which is more or less our earlier expectations on performance.
     
    Heinrich4 and vipa899 like this.
  18. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,734
    Likes Received:
    11,208
    Location:
    Under my bridge
    Again, it needs reiterating what exactly 'ray tracing hardware' is and whether there are suitable substitutes. Could a CPU be enhanced to provide that functionality? Or can the shaders be tweaked? Will there be a development of a Memory Structure Processor that can be used in all sorts of structured memory access functions? Or some level of super-smart cache?

    AFAIK we haven't got a single piece of information about how RT hardware acceleration is implemented, neither from nVidia nor ImgTech, so we don't really know what the best thing going forwards is. The very notion of a 'ray tracing accelerator' may already be conceptually obsolete if the future, as I suddenly suspect having just thought of it, lies in an intelligent memory access module somewhere.
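    To make the 'smart memory access' intuition concrete, here is a toy, purely illustrative CPU-side sketch of BVH traversal (nothing to do with any vendor's actual implementation): almost all of the work is dependent, irregular reads with a little arithmetic in between, which is why fixed-function RT help could plausibly end up looking more like a memory/traversal engine than a classic ALU block.

    #include <algorithm>
    #include <utility>

    struct BvhNode {
        float bmin[3], bmax[3];   // node bounding box
        int   left, right;        // child indices, or -1 when this is a leaf
        int   firstTri;           // leaf payload: index into a triangle list
    };

    // standard slab test: does the ray (origin o, reciprocal direction invD) hit the box?
    static bool rayHitsBox(const float* o, const float* invD,
                           const float* bmin, const float* bmax) {
        float tmin = 0.0f, tmax = 1e30f;
        for (int a = 0; a < 3; ++a) {
            float t0 = (bmin[a] - o[a]) * invD[a];
            float t1 = (bmax[a] - o[a]) * invD[a];
            if (t0 > t1) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
        }
        return tmin <= tmax;
    }

    // iterative traversal: a chain of data-dependent, cache-unfriendly node fetches
    int traverse(const BvhNode* nodes, const float* o, const float* invD) {
        int stack[64]; int sp = 0;
        stack[sp++] = 0;                               // start at the root
        while (sp) {
            const BvhNode& n = nodes[stack[--sp]];     // next load depends on the previous one
            if (!rayHitsBox(o, invD, n.bmin, n.bmax)) continue;
            if (n.left < 0) return n.firstTri;         // leaf: triangle tests would go here
            stack[sp++] = n.left;
            stack[sp++] = n.right;
        }
        return -1;                                     // no leaf box hit
    }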
     
    Michellstar, BRiT and Lalaland like this.
  19. Lalaland

    Regular

    Joined:
    Feb 24, 2013
    Messages:
    596
    Likes Received:
    265
    To further this point, even if you go for a December 2020 launch (I do still hope it's 2019, personally), you're pretty far along in designing these things, so it's in or it isn't at this point. The multiple-possible-designs phase has long since sailed into the good night, with a solid focus on delivering the one or two SKUs they have chosen to launch with. PCB layout, chip layout, etc. are, if not complete, well into the final check-out phase of design, to allow for a few test spins and various stress tests to decide final clocks et al. Happy to be corrected, but my understanding from colleagues who worked on notebook designs in the past is that you decide on the final design 18-24 months out from launch and then design the bits to be ready at least 6 months before launch, to allow for production testing and the initial launch stock to be made.
     
    BRiT likes this.
  20. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,341
    Likes Received:
    438
    Location:
    Finland