Intel ARC GPUs, Xe Architecture for dGPUs

Discussion in 'Architecture and Products' started by DavidGraham, Dec 12, 2018.

  1. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    18,988
    Likes Received:
    3,529
    Location:
    Winfield, IN USA
    It probably was, I've wanted a Vega since Siggraph 2019 when I attended the Capsaicin event thingy.

    "Collecting" reasons? Mebbe we could work a trade, I have an R300 signed by Terry Makedon and a Gemini that doesn't work. :p

    (Nah, I can't give up things like that either. I DO get the collecting thing. ;) )
     
    Lightman and Rootax like this.
  2. HLJ

    HLJ
    Regular

    Joined:
    Aug 26, 2020
    Messages:
    529
    Likes Received:
    869
  3. Digidi

    Regular

    Joined:
    Sep 1, 2015
    Messages:
    428
    Likes Received:
    239
    It’s really quiet here, what happened?
     
  4. Rootax

    Veteran

    Joined:
    Jan 2, 2006
    Messages:
    2,401
    Likes Received:
    1,845
    Location:
    France
    Not a lot of leaks :)
     
  5. Rootax

    Veteran

    Joined:
    Jan 2, 2006
    Messages:
    2,401
    Likes Received:
    1,845
    Location:
    France
    Some principles remind me of VRS?
     
  6. Lurkmass

    Regular

    Joined:
    Mar 3, 2020
    Messages:
    565
    Likes Received:
    711
    Their driver team still has years of maturing to do ...

    No developer ever tests their games on Intel hardware, yet people plainly expect those games to just work there. One developer even intentionally released an application that crashes on Intel hardware, with the excuse that they don't support Intel graphics. Until recently it was all too common for Intel to introduce hacks and workarounds in their drivers just so some games could boot at all. Of all the graphics vendors, it is Intel who faces by far the most of these injustices ...

    On the bright side, they won't have to focus so much on pre-Xe legacy hardware anymore and can concentrate their hacks/workarounds on the current driver stack for their upcoming hardware ...
     
  7. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    DG2 being Q1'22 and meh, and PVC being shit and H2'22, have something to do with it.
     
    Lightman and digitalwanderer like this.
  8. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,057
    Likes Received:
    3,114
    Location:
    New York
    Yeah, it's like VRS on steroids. VRS is a spatial optimization that reuses shading results from nearby pixels in the same frame. Texture space shading reuses shading results from the current or prior frames. What I don't get is how TSS decides that it's safe/accurate to reuse an old shading result if the view or lighting changes.
     
    Putas, DavidGraham and PSman1700 like this.
  9. Dampf

    Regular

    Joined:
    Nov 21, 2020
    Messages:
    284
    Likes Received:
    474
    Very interesting: so the texture space shading part of Sampler Feedback can be emulated in software, but hardware is 3.1x faster. A Texture Space Shading 3DMark test is coming soon (I was hoping it would cover the streaming part, but I guess without DirectStorage that doesn't make much sense right now).

    Their results with Sampler Feedback Streaming are pretty interesting as well. 350 GB of assets fitting within 230 MB of physical memory? That's bonkers! This technique truly is a game changer.
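To get a feel for why feedback-driven streaming shrinks resident memory so dramatically: only the tiles a frame actually samples need to be mapped, not the whole mip chain. A back-of-the-envelope sketch (the scene numbers below are hypothetical, not Intel's demo figures; only the 64 KB tile size comes from D3D12 tiled resources):

```python
TILE_BYTES = 64 * 1024  # D3D12 tiled-resource tile size (64 KB)

def resident_bytes(sampled_tiles):
    """Physical memory needed when only feedback-sampled tiles are mapped."""
    return len(set(sampled_tiles)) * TILE_BYTES

# Hypothetical: a 16 GiB texture set where one frame samples
# only 3,000 distinct tiles across all surfaces.
full_bytes = 16 * 1024**3
used_bytes = resident_bytes(range(3000))  # ~187 MiB resident
reduction = full_bytes / used_bytes       # roughly 87x less physical memory
```

The real savings depend entirely on how few distinct tiles a frame touches, which is exactly what the feedback map measures.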
     
    BRiT and PSman1700 like this.
  10. Lurkmass

    Regular

    Joined:
    Mar 3, 2020
    Messages:
    565
    Likes Received:
    711
    With texture space shading or object space lighting, the goal is to apply shading before rasterization. What is common to both deferred and forward renderers is that shading occurs during or after rasterization ...

    The most naive solution is to shade every texel in object space (the texture), but hopefully you don't. A better solution uses feedback maps (sampler feedback) to record which texels are being requested, so the expensive computations can be limited to just those texels. Other solutions estimate the projected area of each object and break its texture down into 8x8 tiles, each of which can be individually requested for shading ...
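A rough software sketch of that feedback-map approach (Python/NumPy; the tile size matches the 8x8 scheme above, but the function names and shading callback are illustrative, not any real API):

```python
import numpy as np

TILE = 8  # texels per tile edge, as in the 8x8 tiling scheme above

def gather_requested_tiles(feedback_map):
    """Return (ty, tx) indices of 8x8 tiles containing any sampled texel.

    feedback_map: boolean (H, W) array, True where rasterization
    requested a texel this frame (what sampler feedback records).
    """
    h, w = feedback_map.shape
    tiles = feedback_map.reshape(h // TILE, TILE, w // TILE, TILE)
    live = tiles.any(axis=(1, 3))  # a tile is live if any texel in it was hit
    return np.argwhere(live)

def shade_texture_space(texture, feedback_map, shade_fn):
    """Reshade only requested tiles; untouched tiles keep cached results."""
    for ty, tx in gather_requested_tiles(feedback_map):
        ys, xs = ty * TILE, tx * TILE
        texture[ys:ys + TILE, xs:xs + TILE] = shade_fn(ys, xs)
    return texture
```

The expensive `shade_fn` runs only on tiles the feedback map flagged; everything else reuses whatever shading is already in the texture.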
     
    Jawed, DegustatoR and BRiT like this.
  11. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,242
    Likes Received:
    3,402
    Texture space shading was used in pure s/w in Ashes of the Singularity (and a couple of other games on that engine) and was promoted as a h/w feature of Turing at its launch.
    SFS is an extension of this idea, I believe, which was eventually added to DX.
     
    PSman1700 likes this.
  12. JoeJ

    Veteran

    Joined:
    Apr 1, 2018
    Messages:
    1,523
    Likes Received:
    1,772
    What do you mean by 'SFS'?

    Do we already know anything about Intel support regarding raytracing and mesh shaders? I think neither is confirmed yet?
     
  13. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,242
    Likes Received:
    3,402
    https://microsoft.github.io/DirectX-Specs/d3d/SamplerFeedback.html
     
    PSman1700 and JoeJ like this.
  14. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,242
    Likes Received:
    3,402
    PSman1700, Newguy, Dictator and 3 others like this.
  15. Dictator

    Regular

    Joined:
    Feb 11, 2011
    Messages:
    682
    Likes Received:
    3,969
  16. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,242
    Likes Received:
    3,402
    Interesting that they recommend using TraceRay over inline invocations, I wonder why.
    AFAIR AMD recommends the opposite? And NV basically says do whatever here.
     
  17. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,057
    Likes Received:
    3,114
    Location:
    New York
    From Microsoft: "The basic assumption is that scenarios with many complex shaders will run better with dynamic-shader-based raytracing. As opposed to using massive inline raytracing uber-shaders. And scenarios that would use a very minimal shading complexity and/or very few shaders might run better with inline raytracing. Where to draw the line between the two isn’t obvious in the face of varying implementations. Furthermore, this basic framing of extremes doesn’t capture all factors that may be important, such as the impact of ray coherence. Developers need to test real content to find the right balance among tools, of which inline raytracing is simply one."

    If AMD has a different recommendation and their hardware always prefers inline, then it could mean they handle uber-shaders better, or that their dynamic scheduling just isn't up to the task.

    Was this meant to answer the question on when cached texels from prior frames are used/discarded? I don't think it does.
     
  18. Lurkmass

    Regular

    Joined:
    Mar 3, 2020
    Messages:
    565
    Likes Received:
    711
    The concept behind TSS/OSL is that texels which don't get requested for shading can all be used to cache prior results ...

    If we take a dynamic light as an example, its projected area in object space will be used to generate shading requests on those specific texels, so prior results over the same area get implicitly rejected ...
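That implicit-rejection policy can be sketched in a few lines (Python; the cache layout and function name are illustrative assumptions, not any engine's actual implementation):

```python
def resolve_texel(cache, texel, requested, shade_fn, frame):
    """Return the shading for one texture-space texel.

    requested: set of texels whose projected area generated shading
    requests this frame (e.g. texels under a dynamic light). A request
    overwrites the cached value, implicitly rejecting the stale result;
    texels not requested simply reuse whatever frame last shaded them.
    """
    if texel in requested:
        cache[texel] = (shade_fn(texel), frame)  # reject the prior result
    return cache[texel]
```

So there is no explicit "is this cached texel still valid?" test: anything that moves or changes generates new requests over its footprint, and those requests are what evict the old shading.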
     
