Could PlayStation 4 breathe new life into Software Based Rendering?

Status
Not open for further replies.
Funny that people continue to troll even after I've been proven right so many times when they tried to make me out to be crazy. (Hello Shifty Geezer)
AFAICS DSoup's post was just a joke mixing together titles that refer to all the qualities you wanted to see in a game.

As for being right, people can read through this thread at your points and agree or not (
I disagree ;)
). The only real software renderer I see at the moment is MM's Dreams, and it's not using the APU as a magical new HSA solution as you were suggesting PS4 is uniquely positioned to do. It's using GPU compute. Then we have the likes of Sebbbi's work moving the rendering of traditional triangles fully over to the GPU, having it create the drawing jobs, which again doesn't require PS4 nor HSA. Other points raised like compute-based lighting aren't software rendering. At the root of the disagreements in this thread is a difference in view of what Software Rendering is, which isn't worth discussing again.
 

The PS4 APU is a product of HSA; the reason games like Dreams are happening is that its design made it easier for the devs to take advantage of compute.
 
Signed distance field rendering is possible on nVidia GPUs. It isn't advanced by HSA. The only reason MM are pushing that boundary is because they are a fully supported 1st-party company given the freedom to explore (three years of funded experimentation with no need to release a product), with a product vision that required lots of experimental techniques to be tried before MM could find a solution. That included producing tools to enable creation of assets, where everyone else is building engines around the existing tools (triangle-based modellers, Maya, ZBrush).

Plenty of other devs on other hardware could implement the same sorts of compute-based solutions given the time and low-pressure funding and the same product requirements. PS4 hardware designs aren't an enabler in this case. I don't recall a single reference to HSA in Alex's Siggraph presentation on Dreams.
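For readers unfamiliar with the technique being discussed: signed distance field rendering works by marching rays through a function that returns the distance to the nearest surface. A minimal pure-Python sketch of sphere tracing follows; it is an illustration of the general idea only (Dreams' actual renderer is far more sophisticated and runs as GPU compute), and all function names and scene values here are invented.

```python
import math

def sdf_sphere(x, y, z, cx=0.0, cy=0.0, cz=3.0, r=1.0):
    """Signed distance from point (x, y, z) to a sphere's surface:
    negative inside, positive outside."""
    return math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2) - r

def sphere_trace(ox, oy, oz, dx, dy, dz, max_steps=64, eps=1e-4, max_dist=100.0):
    """March a ray from the origin along (dx, dy, dz). The distance field's
    value is always a safe step size, so we advance by it each iteration.
    Returns the hit distance, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf_sphere(ox + dx * t, oy + dy * t, oz + dz * t)
        if d < eps:
            return t          # close enough to the surface: a hit
        t += d                # safe to advance by the distance value
        if t > max_dist:
            break
    return None

# A ray straight down +z hits the sphere (centre z=3, radius 1) at z = 2.
hit = sphere_trace(0.0, 0.0, 0.0, 0.0, 0.0, 1.0)
print(round(hit, 3))  # → 2.0
```

Nothing in this loop depends on any particular GPU vendor or on HSA; on real hardware the same per-ray loop simply runs as one thread of a compute shader.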
 

+1, and some of the R&D is very interesting and could be applied in other titles, or used to refine the renderer on more powerful hardware.
 

Where do I say that this is only achievable on PS4 with HSA? I said that the PS4 APU is a product of HSA, and games like Dreams are happening because the design of the PS4 APU made it easier for devs to take advantage of compute.

You brought up my talk of the APU design to use as a loophole, and I explained that it is a product of HSA and that its design did help this to happen.

Your words

No no no! Cell and Larrabee as software renderers are focussed on fully programmable instructions, with loops and branches and random memory access. GPUs work differently, and GPGPU is not a replacement for CPU processing. You cannot freely compute on GPU when using its full performance. GPU compute has to be considered a subset of all compute operations you could want to do. A software rasteriser aims to negate a CPU's lack of pure throughput by using specialist techniques to gain efficiency, but it cannot compete in performance to a GPU designed for the job of rasterising graphics. And progress in GPUs means some of the techniques in software rendering can be applied to the GPU rasteriser.

There's no future in 'software rendering' on PS4, or any APU. What we'll have is developers combining techniques as best fits, such as, for example, using a line-drawing function on the CPU to render power cables, which isn't a good fit for the massively parallel nature of a GPU, and then compositing those power cables with the backbuffer using the GPU-produced Z buffer as a mask. That'd be similar to Cell rendering volumetric clouds combined with RSX's triangles. There'll be hybrid rendering, but not software (free-form, use any algorithm you like) rendering, as that wastes the performance of the GPU.

There is no future in it, yet here we are looking right at it.
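The hybrid scheme quoted above (the CPU drawing thin lines and compositing them against the GPU's depth buffer) can be sketched roughly like this. It's a pure-Python toy, not how a real engine would implement it; the buffer layout, sizes, and names are all invented for illustration.

```python
def draw_line_masked(color_buf, depth_buf, x0, y0, x1, y1, line_depth, color, w):
    """Bresenham-style line draw on the CPU, writing a pixel only where the
    line is nearer than the GPU-produced depth buffer (depth test as a mask).
    Buffers are flat lists indexed as y * w + x."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        i = y * w + x
        if line_depth < depth_buf[i]:   # line is in front of scene geometry
            color_buf[i] = color
        if x == x1 and y == y1:
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy; x += sx
        if e2 < dx:
            err += dx; y += sy

# 4x4 framebuffer; scene geometry at depth 0.5 covers the right half.
W = H = 4
color = [0] * (W * H)
depth = [1.0 if x < 2 else 0.5 for y in range(H) for x in range(W)]
draw_line_masked(color, depth, 0, 0, 3, 0, 0.8, 255, W)
print(color[0:4])  # → [255, 255, 0, 0]: the line is occluded where depth is 0.5
```

The point of the technique is that the CPU-drawn element respects GPU occlusion without the CPU having to rasterise any of the scene's triangles itself.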
 
You brought up my talk of the APU design to use as a loophole, and I explained that it is a product of HSA and that its design did help this to happen.
That has nothing to do with HSA and everything to do with compute.

There is no future in it, yet here we are looking right at it.
The 'future' you are talking of is the "some fringe games" I refer to elsewhere in this thread (and Grall even suggests in post 2) but can't be bothered to go searching for. Yes, some novel engines will explore completely new ways of doing things with specific aesthetics that match the engine (just as they always have, even with 2D hardware), but that'll be enabled by the advance of compute and programmable GPUs and has squat to do with PS4's design. It'll happen with or without PS4. For everything else there'll be triangle rasterisation making use of the tools to create triangle meshes and the hardware built into the GPU designed to draw triangles, and PS4 won't make a dent into that standard.
 

Still trying to jump through that loophole. This has nothing to do with HSA, but the PS4 APU is a product of HSA, and the APU was designed in a way that made it better at compute. I simply asked what could be done using the PS4 for software-based rendering, seeing as it was designed with better GPGPU compute in mind.
 
The APUs in the Xbox or PS4, and the SoCs in mobiles, are integrated for cost reasons; that's the main driving force.

Like FPUs before, and memory controllers, now GPUs, and tomorrow even memory.
 
I simply asked what could be done using the PS4 for software-based rendering, seeing as it was designed with better GPGPU compute in mind.
No you weren't. You were talking about the close-knit relationship between CPU and GPU enabling better software rendering. See post 9...
I'm looking at the Design of the PS4 SoC & it's looking like the Voltron of The Cell Processor or Larrabee when the CPU & GPGPU form together.

Seems like it's all made to work together as a powerful CPU if a dev chooses to use it all for computing.
And post 14...
This is what I'm talking about with having the CPU & GPGPU formed together like the Voltron of The Cell Processor or Larrabee.
Devs writing the code for the CPU, and the CPU & GPGPU doing the job as one: isn't this one of the points of HSA?
If all you're interested in is how GPGPU/compute can be used in software rendering, the title of this thread should be, "could next-gen GPUs and compute breathe new life into software rendering?", and it'd be platform agnostic and only mention HSA in passing as a question of whether that brings any advantages or not. And then the discussion should be held in the Algorithms forum and would talk not about software rendering but about programmable GPU techniques for solving (parts of) image drawing, whether using triangle rasterisation or not.
 

I asked about the PS4 because it has an 8 x 8 MIMD design with shared memory in a closed box.
 
The PS4 GPU was modified so it could have 8 compute-only pipelines, and each pipeline has 8 queues.

The 8 ACEs in PS4's GCN-variant, and recent GCN GPUs. That's GPU-level optimization I'm not comfortable talking about, because it's really beyond my layman's understanding. I understand the general concept, but efficiency gains are out of my league.
 
I asked about the PS4 because it has an 8 x 8 MIMD design with shared memory in a closed box.
Shared memory doesn't make any difference if you're not talking about CPU<>GPU (HSA) interaction. For GPU compute, it works from its RAM pool.

The ACEs don't enable compute. They only improve efficiency of CU utilisation. There's nothing PS4's CUs can do that larger discrete PC parts can't (and much better when those parts are high end). It's certainly not an '8x8 MIMD' design, whatever that is. It's a GPU with 18 CUs (processors) and 8 compute instruction dispatch engines with 8 instruction queues each. A single compute instruction dispatch engine would be able to do everything PS4's GPU can, just having to do the jobs serially, so less efficiently. But the actual novel rendering algorithms you're talking of don't require multiple ACEs; they only require advancement of the core GPU processor architecture to support more varied code.
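The distinction between capability and efficiency can be made concrete with a toy scheduler. This is a deliberately crude model (the names and the dispatch policy are invented, and real ACEs interleave work far more cleverly): several queues feeding the same pool of compute units finish the same total work in the same time as one queue does, because the queues add scheduling flexibility, not throughput.

```python
from collections import deque

def run(queues, slots):
    """Toy dispatcher: each tick, up to `slots` compute units greedily pull
    jobs from the non-empty queues in order. Returns the number of ticks
    needed to drain all queues."""
    ticks = 0
    while any(queues):          # an empty deque is falsy
        busy = 0
        for q in queues:
            while q and busy < slots:
                q.popleft()     # one job issued to a compute unit this tick
                busy += 1
        ticks += 1
    return ticks

jobs = 32
# One queue vs eight queues feeding the same 8 "compute units": the total
# work is identical either way, so so is the time to drain it.
one   = run([deque(range(jobs))], slots=8)
eight = run([deque(range(jobs // 8)) for _ in range(8)], slots=8)
print(one, eight)  # → 4 4
```

Where multiple queues actually pay off on real hardware is when some jobs stall or when compute has to share the machine with graphics work, neither of which this toy models.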
 
I asked about the PS4 because it has an 8 x 8 MIMD design with shared memory in a closed box.
The phrasing "8 x 8 MIMD" sounds like something very different than what's actually happening.

You realize that the ACEs aren't execution units, right?
 

See number 4? The CPU & GPU are formed together as a MIMD processor.

"As for the "supercharged" parts, or the parts that SCE extended, he said, "There are many, but four of them are representative." They are (1) a structure that realizes high-speed data transmission between the CPU and GPU, (2) a structure that reduces the number of times that data is written back from the cache memory in the GPU, (3) a structure that enables to set priorities in multiple layers in regard to arithmetic and graphics processing and (4) a function to make the CPU take over the pre-processing to be conducted by the GPU."
 
That's not what Shifty and HTupolev are talking about, though. You need to elaborate on that '8x8 MIMD' term. What I understand from the fourth point is that results of GPU work can easily be handed off to the CPU. I'm guessing memory mapping changes here without any copy operation, which is great and possible only on consoles.
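The "hand off without a copy" idea above can be illustrated in miniature with Python's buffer views. This is only an analogy (Python's `memoryview` is not GPU memory mapping, and the buffer contents here are made up): a copy snapshots the data, while a view aliases the same memory, so later writes by the producer are visible to the consumer for free.

```python
# Toy illustration of copying a result buffer vs handing over a view of the
# same memory (the "no copy, just remap" idea for CPU<>GPU handoff).
gpu_result = bytearray(b"\x01\x02\x03\x04")   # stands in for a GPU-written buffer

copied = bytes(gpu_result)        # discrete-GPU style: duplicate into CPU memory
shared = memoryview(gpu_result)   # unified-memory style: same bytes, zero copy

gpu_result[0] = 0xFF              # the "GPU" writes again
print(copied[0], shared[0])       # → 1 255: the copy is stale, the view is live
```

On a unified-memory console the saving is real: no transfer over a bus and no second allocation, just cache coherency (or flushes) to manage.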
 