Could PlayStation 4 breathe new life into Software Based Rendering?

Discussion in 'Console Technology' started by onQ, Mar 2, 2013.

Thread Status:
Not open for further replies.
  1. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    14,082
    Likes Received:
    5,321
    As I said, the claim for MIMD on AMD GCN is based on running multiple programs on the same GPU. The hardware gets partitioned to reserve hardware resources for each program, effectively creating two virtual GPUs, hence a MIMD execution model. That's how I understand it. A single program (game) dispatching compute jobs to ACEs is not MIMD. The execution model for a typical game on any GCN hardware is what they call SMT or SIMT (not totally clear on the difference between the two).

    There is nothing different about the ACEs on PS4. It is the same compute model as any other GCN GPU. At the time it came out, it had more ACEs than Xbox One. I'm not sure if PC parts had 8 ACEs at the time, but they do now.
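
    For illustration, here's a minimal sketch of the SIMT execution model being described, written in CUDA purely as an analogy (GCN wavefronts are 64 lanes wide rather than 32, and the terminology differs, but the model is the same): one instruction stream fans out across many threads, each working on its own data element.

        // SIMT sketch: a single kernel (one instruction stream) executed by
        // many threads, each on its own element. CUDA here is an analogy
        // for GCN's execution model, not PS4 code.
        __global__ void scale(float *data, float k, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)          // every thread runs the same program...
                data[i] *= k;   // ...on its own piece of the data
        }

        int main()
        {
            const int n = 1 << 20;
            float *d;
            cudaMallocManaged(&d, n * sizeof(float));
            for (int i = 0; i < n; ++i) d[i] = 1.0f;
            scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n); // one program, ~1M threads
            cudaDeviceSynchronize();
            cudaFree(d);
            return 0;
        }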
     
    #221 Scott_Arm, Oct 19, 2015
    Last edited: Oct 19, 2015
  2. upnorthsox

    Veteran

    Joined:
    May 7, 2008
    Messages:
    2,102
    Likes Received:
    378
    Why is that? That's entirely made up by you.
     
  3. upnorthsox

    Veteran

    Joined:
    May 7, 2008
    Messages:
    2,102
    Likes Received:
    378
    I don't agree with onQ's pounding of square pegs into round holes here, but putting unreasonable restrictions on what software rendering is does nothing to further the discussion.
     
  4. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    9,934
    Likes Received:
    9,320
    Location:
    Self Imposed Work Exile: The North
    Is this a poor definition?
     
    ThePissartist likes this.
  5. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,527
    Likes Received:
    3,368
    Location:
    Barcelona Spain

    100% compute rendering is software rendering using the CUs, like in Dreams: no rasterisation.
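
    As a hedged sketch of what rendering entirely in compute can look like, loosely in the spirit of Dreams' point splatting (a hypothetical kernel, not Media Molecule's actual code; CUDA stands in for a generic compute language): a kernel scatters depth-tested points straight into a buffer, with no rasteriser or ROPs involved.

        // Hypothetical compute-only splatter. Depth (z assumed in [0,1]) is
        // packed into the high 32 bits so a single atomicMin performs both
        // the depth test and the colour write. The framebuffer fb is assumed
        // cleared to ~0ull. Host setup omitted; needs compute capability 3.5+.
        struct Point { float x, y, z; unsigned rgba; };

        __global__ void splat(const Point *pts, int n,
                              unsigned long long *fb, int w, int h)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            int px = (int)(pts[i].x * w), py = (int)(pts[i].y * h);
            if (px < 0 || px >= w || py < 0 || py >= h) return;
            unsigned long long depth = (unsigned long long)(pts[i].z * 0xFFFFFFFFu);
            unsigned long long frag = (depth << 32) | pts[i].rgba;
            atomicMin(&fb[py * w + px], frag);  // nearer point wins the pixel
        }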
     
  6. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    42,991
    Likes Received:
    15,156
    Location:
    Under my bridge
    "Software Rendering" was a phrase coined long before current highly programmable GPUs. It defined an architecture made up of a single processor type (homogeneous mutlicore/manycore) that were general purpose processors, with no intrinsic limitations in data types they were efficient with, being used to render with any algorithm at all, with the complete flexibility of a CPU.

    As an architectural goal, modern GPUs != Software Rendering

    What has happened is that GPUs have expanded their flexibility to be able to do more, although they still have caveats about the data structures and associated algorithms they are efficient with (required parallelism). This has resulted in non-traditional rasterising options; that is, techniques that move away from the algorithms of the '90s which were implemented in hardware as pixel shaders, vertex shaders, T&L and ROPs. Experimentation in this field has led to things like compute-based lighting being used with conventional rasterisation. It's also worth noting that GPUs have been using software for a while now, with pixel and vertex shaders being very flexible software programs.
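
    A minimal sketch of that compute-based-lighting-over-conventional-rasterisation idea (hypothetical G-buffer layout and a simple Lambert term only; CUDA for illustration): the G-buffer comes out of the ordinary graphics pipeline, and a compute kernel then does the per-pixel shading maths.

        // Hypothetical deferred lighting pass in compute. The G-buffer is
        // assumed to have been rasterised conventionally beforehand.
        struct GBufferTexel { float3 normal; float3 albedo; };

        __global__ void lightPass(const GBufferTexel *gbuf, float3 L,
                                  float3 *out, int w, int h)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= w || y >= h) return;
            GBufferTexel t = gbuf[y * w + x];
            float ndotl = fmaxf(0.0f, t.normal.x * L.x +
                                      t.normal.y * L.y +
                                      t.normal.z * L.z);   // Lambert term
            out[y * w + x] = make_float3(t.albedo.x * ndotl,
                                         t.albedo.y * ndotl,
                                         t.albedo.z * ndotl);
        }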

    For the purposes of this thread, which I increasingly feel should just be closed as the signal-to-noise ratio is terrible, let's define Software Rendering as 'non-conventional image creation using techniques that would not fit a fixed-function GPU like the GeForce2'*.

    *Someone correct me on my choice of GPU if wrong!
     
  7. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,439
    Likes Received:
    273
    It's complicated to describe modern GPU architectures, but I view GCN as being a scalar/vector processor where the vector unit executes in SIMD fashion. There are a multitude of these scalar/vector processors, 4 per CU and many CUs. The instructions can come from multiple kernels which coexist on the GPU, but are loaded sequentially from the same queue or concurrently from different queues. Personally, I don't like to co-mingle SIMD and MIMD terminology but as you can see from the old AMD slide some people like to describe a CU as being SIMD and MIMD depending on the level at which you're thinking.
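
    To illustrate the "sequentially from the same queue or concurrently from different queues" point, here's a rough sketch using CUDA streams as a stand-in for hardware queues (the mapping to GCN's queues is conceptual, not one-to-one):

        // Two kernels submitted to one queue run in order; submitted to two
        // queues they may coexist on the GPU. Streams here are only an
        // analogy for GCN's hardware queues.
        __global__ void kernelA(float *x, int n)
        { int i = blockIdx.x * blockDim.x + threadIdx.x; if (i < n) x[i] += 1.0f; }
        __global__ void kernelB(float *x, int n)
        { int i = blockIdx.x * blockDim.x + threadIdx.x; if (i < n) x[i] *= 2.0f; }

        int main()
        {
            const int n = 1 << 20;
            float *x, *y;
            cudaMalloc(&x, n * sizeof(float));
            cudaMalloc(&y, n * sizeof(float));
            cudaStream_t s1, s2;
            cudaStreamCreate(&s1);
            cudaStreamCreate(&s2);
            kernelA<<<(n + 255) / 256, 256, 0, s1>>>(x, n); // same queue:
            kernelB<<<(n + 255) / 256, 256, 0, s1>>>(x, n); // sequential
            kernelA<<<(n + 255) / 256, 256, 0, s1>>>(x, n); // different queues:
            kernelB<<<(n + 255) / 256, 256, 0, s2>>>(y, n); // may run concurrently
            cudaDeviceSynchronize();
            cudaStreamDestroy(s1); cudaStreamDestroy(s2);
            cudaFree(x); cudaFree(y);
            return 0;
        }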
     
    chris1515 likes this.
  8. onQ

    onQ
    Veteran

    Joined:
    Mar 4, 2010
    Messages:
    1,540
    Likes Received:
    55
    No, you said it has nothing to do with HSA, & I explained that the PS4 APU was a product of HSA.


    You're still trying to find loopholes.
     
  9. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    14,082
    Likes Received:
    5,321
    What does HSA have to do with this? Say the PS4 APU is a product of HSA, then so what?
     
  10. onQ

    onQ
    Veteran

    Joined:
    Mar 4, 2010
    Messages:
    1,540
    Likes Received:
    55

    Why are you looking for my argument? It was simple. I looked at the design of the PS4 & its specs & saw that it was made with compute in mind, & asked whether it could breathe new life into software-based rendering, seeing as it's a closed box built around compute. I then asked how far y'all thought software-based rendering could go with the specs of the PS4.

    That seemed like a simple question to ask on Beyond3D, I thought. I guess it wasn't, so I explained that the PS4 APU is like Cell / Larrabee when the CPU & GPGPU are used together for compute.

    People still said "no, it's not gonna happen", but it did happen, & now somehow the arguments are being changed into all types of stuff.
     
  11. onQ

    onQ
    Veteran

    Joined:
    Mar 4, 2010
    Messages:
    1,540
    Likes Received:
    55
  12. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    14,082
    Likes Received:
    5,321
    You were correct in pointing out that the PS4 was designed for GPGPU, which led to Sony making a significant investment in GPGPU rendering research.

    I think people just take issue when you post things and don't back down when it's pointed out that the correlations you make, or the usage of terminology, are incorrect.

     
  13. mrcorbo

    mrcorbo Foo Fighter
    Veteran

    Joined:
    Dec 8, 2004
    Messages:
    3,900
    Likes Received:
    2,600
    My confusion here stems from HSA and compute queues being held up as reasons that the PS4 would be better suited to software rendering.

    In the case of HSA, it has always been my understanding that this framework exists to better enable different types of processors to work together on the same data. I don't see how useful it is, though, given the relative processing power available, to have the PS4's CPU engaged in graphics workloads to any large degree. I see this more being used the other way around, where the GPU can be used as a supplement to the CPU's processing capabilities for non-graphics workloads. Are there any types of work done in a software renderer that would be better suited to being done on the PS4 CPU's processing units vs. the CUs on the GPU?
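
    As a rough sketch of that "work together on the same data" idea (CUDA's unified memory used here as an analogy for hUMA; the PS4's actual APIs and cache behaviour differ): CPU and GPU touch one allocation with no explicit copies, so work can be handed back and forth cheaply in either direction.

        #include <cstdio>

        // One allocation visible to both processors -- a loose analogue of
        // a shared hUMA address space, not PS4 code.
        __global__ void gpuStep(int *data, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) data[i] *= 2;                    // GPU's half of the job
        }

        int main()
        {
            const int n = 1024;
            int *data;
            cudaMallocManaged(&data, n * sizeof(int));  // shared allocation
            for (int i = 0; i < n; ++i) data[i] = i;    // CPU writes it...
            gpuStep<<<(n + 255) / 256, 256>>>(data, n); // ...GPU transforms in place...
            cudaDeviceSynchronize();
            printf("%d\n", data[42]);                   // ...CPU reads, no memcpy
            cudaFree(data);
            return 0;
        }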

    And the larger number of compute pipelines/queues has always seemed to me to be a way to better have work, well... queued up for those moments when the graphics pipeline is not using all of the compute resources. When that "gap" opens you want to be able to take advantage of it immediately and not still be setting up work on the front end. When you are specifically not using the graphics pipeline and doing everything with compute, are the larger number of compute pipelines and queues *as much* of an advantage as they are when doing traditional rendering in addition to compute work? I'm sure they help to some degree, but is it enough of a difference to make some techniques viable that wouldn't be otherwise?

    I think there are elements in the PS4 APU's design that do open up more opportunities for software rendering, but I think the most important of those elements are common in any modern graphics processor.
     
  14. onQ

    onQ
    Veteran

    Joined:
    Mar 4, 2010
    Messages:
    1,540
    Likes Received:
    55
    What's wrong with that? You can't see Jaguar as the PPU & the GPGPU being used as a co-processor like the SPUs?



    (4) a function to make the CPU take over the pre-processing to be conducted by the GPU.



    Look at the last line.


    [image]




    Last line again.




    [image]
     
  15. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    14,082
    Likes Received:
    5,321
    I'm not sure what "the Voltron of The Cell Processor or Larrabee when the CPU & GPGPU form together" is supposed to mean. If it just means the CPU can use the GPU as a co-processor for GPGPU, then you can say any PC with a CPU paired with a GPU that can do GPGPU/compute is like Cell or Larrabee. It's not a particularly useful analogy.
     
  16. onQ

    onQ
    Veteran

    Joined:
    Mar 4, 2010
    Messages:
    1,540
    Likes Received:
    55
    You're still looking at the GPU by itself when that's not the case. The PS4 has an 8-core CPU connected to its GPGPU through hUMA; when doing compute it's basically one big MIMD processor, like the Cell processor.



    [image]


    [image]



    To explain: traditional queuing systems work by forcing the CPU to run through an operating system service and kernel-level driver when generating workloads for the GPU. This, AMD claims, introduces unwanted latency to the queue - and means that every time there's work for the GPU to carry out, the CPU has to get involved to kick things off.

    The hQ paradigm, by contrast, upgrades the GPU from a mere adjunct to the CPU to a processor of equal status. In this design, a given application can generate task queues directly on the GPU without the CPU getting involved. Better still, the GPU can generate its own workload - AMD's example here is raytracing, where one GPU task may generate several more tasks in its execution, and which would previously have needed the CPU and operating system to be involved in queuing said new tasks - and even push work into the CPU's task queue. Equally, the CPU can push work into the GPU task queue without operating system involvement - creating a bi-directional queueing system which dramatically reduces latency and allows applications to easily push jobs to whichever processor is most appropriate.
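
    For a concrete (if loose) analogue of the GPU generating its own work, CUDA's dynamic parallelism lets one kernel launch another with no CPU round trip - a hypothetical sketch of the raytracing pattern described above, not AMD's hQ itself (needs compute capability 3.5+ and -rdc=true):

        // A parent kernel spawns a child kernel directly on the device --
        // the "GPU generates its own workload" idea.
        __global__ void shadeSecondaryRays(float *out, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) out[i] = 1.0f;   // stand-in for a secondary-ray pass
        }

        __global__ void tracePrimaryRays(float *out, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            // Suppose tracing found work for a follow-up pass: enqueue it
            // from the GPU, with no OS or CPU involvement.
            if (i == 0)
                shadeSecondaryRays<<<(n + 255) / 256, 256>>>(out, n);
        }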



    It's not the same as just having any PC with a CPU paired with a GPU.

    [image]
     
  17. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    14,082
    Likes Received:
    5,321
    No.

    My understanding is that all multicore CPUs are MIMD. If you want to throw in the GPU as another part of the overall architecture as being MIMD, then whatever, but then all multicore CPUs paired with GPUs are an overall MIMD architecture.
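
    For reference, a minimal sketch of why any multicore CPU already qualifies as MIMD (plain host-side C++): two cores executing two different instruction streams on two different data sets at the same time is all the term requires.

        #include <numeric>
        #include <thread>
        #include <vector>

        int main()
        {
            std::vector<int> a(1000, 1), b(1000, 3);
            long sum = 0, prod = 1;
            // Two different instruction streams on two different data sets,
            // running simultaneously: MIMD by definition.
            std::thread t1([&] { sum = std::accumulate(a.begin(), a.end(), 0L); });
            std::thread t2([&] { for (int v : b) prod = (prod * v) % 1000003; });
            t1.join();
            t2.join();
            return 0;
        }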

    It's also only like the Cell if you ignore all of the architectural differences between Jaguar and GCN vs Cell's SPEs and PPE. In which case, why compare them at all?

    Yes, it is. HSA makes some things easier, like not having to copy to/from GPU memory, but the fundamental concepts are still the same.
     
    #237 Scott_Arm, Oct 21, 2015
    Last edited: Oct 21, 2015
  18. onQ

    onQ
    Veteran

    Joined:
    Mar 4, 2010
    Messages:
    1,540
    Likes Received:
    55
    It's not the same.



    [image]
     
  19. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    14,082
    Likes Received:
    5,321
    I don't understand what you're trying to tell me by posting those images. What piece of information is relevant?
     
  20. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    42,991
    Likes Received:
    15,156
    Location:
    Under my bridge
    I've decided this thread isn't worthy of continuing and is a distraction from the real discussion of compute-based and novel rendering techniques. onQ's discussion techniques of one-liners, indirect responses to questions, and quotes and images without explanation just lead to confusion and arguments about what's being said, rather than discussion of value to a B3D technical forum.

    I also conclude there's no point to a singular 'novel rendering techniques' thread, as each technique worth discussing can have its own thread, such as Dreams and tiled resources having their own tech threads.
     
    #240 Shifty Geezer, Oct 21, 2015
    Last edited: Oct 22, 2015