Could PlayStation 4 breathe new life into Software Based Rendering?


onQ

Veteran

[Image: PS4+GPGPU.jpg]



Sony seems to have put a lot of work into making sure the PS4 will be able to easily handle General-Purpose Computing, which has me wondering what can be achieved on the PlayStation 4 using the CPU & GPGPU together for computing.

If you had to guess, how far would you say ~2 TFLOPS of compute & 8GB of GDDR5 could go with Software Based Rendering in a console?
 
If by that you mean something like drawing polygons using software rasterization, like back in the (primarily early/mid-) 1990s, then what would be the point?
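(For concreteness, here's roughly what that kind of 1990s-style software rasterizer boils down to: a minimal single-triangle sketch in C++. Toy code for illustration only: no clipping, no SIMD, no tiling.)

Code:
// Minimal software rasterizer: one flat-shaded triangle drawn into a
// CPU-side framebuffer using edge functions.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };

// Signed area test: > 0 when c lies to the left of the edge a->b.
static float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

int main() {
    const int W = 64, H = 32;
    std::vector<uint8_t> fb(W * H, 0);      // framebuffer in plain RAM
    Vec2 v0{8, 4}, v1{56, 10}, v2{20, 28};  // a single triangle

    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};     // sample at the pixel centre
            float w0 = edge(v1, v2, p), w1 = edge(v2, v0, p), w2 = edge(v0, v1, p);
            // Inside if p is on the same side of all three edges
            // (w0/w1/w2 would also give barycentrics for interpolation).
            if ((w0 >= 0 && w1 >= 0 && w2 >= 0) || (w0 <= 0 && w1 <= 0 && w2 <= 0))
                fb[y * W + x] = 1;
        }

    for (int y = 0; y < H; ++y) {           // ASCII dump of the framebuffer
        for (int x = 0; x < W; ++x) putchar(fb[y * W + x] ? '#' : '.');
        putchar('\n');
    }
}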

If you're thinking of more esoteric approaches like constructive solid geometry, Bézier patch surfaces, splines, voxels and so on, you run into the same issues that have prevented these from becoming mainstream rendering approaches.

You could undoubtedly find uses for all of these in experimental and/or old-skool-looking games, but it's really hard to build organic-looking objects, and/or animate them, using these approaches, which would likely prevent any such games from going mainstream.

Not saying it couldn't, or shouldn't, happen. I'd love to see some awesome software-rendered graphics. However, it would take some kind of wizard game designers and programmers finding the right techniques for the right game to pull something worthwhile off, rather than creating just a tech demo for the hell of it, with a flawed gameplay concept...
 

Would something like a billiards game with real-time ray tracing work?
 
Yeah, probably, but it sounds gimmicky, and thus doubtful. Would ray tracing your billiard balls (perhaps using CSG for perfect spheres) make for a better game somehow? Probably not. You could just as well use standard 3D rendering techniques, cube maps for reflections and so on, and sink a million-plus polygons into each ball just for the hell of it to make them round even in tight close-ups.

The end result would look pretty similar; you could get better reflections with ray tracing (mirror balls reflecting each other), but then you reduce graphics to a rather pointless checkbox feature, and beyond a few computer graphics gearheads here or there - who cares about self-reflecting billiard balls? :D
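For what it's worth, the "perfect spheres" part really is cheap: a ray-sphere hit test is just solving a quadratic per ray. A minimal sketch (the expensive part of a real tracer is the shading, shadows and inter-ball reflections, none of which are shown here):

Code:
// Toy ray-sphere intersection, the core primitive of a raytraced billiard
// ball. Returns the distance t along the ray to the nearest hit, or -1.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static float hitSphere(Vec3 orig, Vec3 dir, Vec3 centre, float radius) {
    Vec3 oc = sub(orig, centre);
    float a = dot(dir, dir);
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - 4 * a * c;              // quadratic discriminant
    if (disc < 0) return -1.0f;                  // ray misses the sphere
    float t = (-b - std::sqrt(disc)) / (2 * a);  // nearer of the two roots
    return t >= 0 ? t : -1.0f;
}

int main() {
    Vec3 eye{0, 0, 0}, ball{0, 0, -5};
    float t = hitSphere(eye, {0, 0, -1}, ball, 1.0f);
    std::printf("hit at t = %.2f\n", t);         // expect 4.00
}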
 
I'd say no. That 1.84 TFLOPS figure you have is not for the CPU,
and software rendering is still painfully slow on a CPU, even compared to a low-end GPU.
PC CPUs have twice the power of the new PS4 CPU, and even so no one's really bothered with software rendering.
 

But I think he means software rendering on the GPU. Seems rather silly, but depending on how unconventional you wanna go it might make sense... although very unlikely.

http://www.tml.tkk.fi/~samuli/publications/laine2011hpg_paper.pdf (Laine & Karras, "High-Performance Software Rasterization on GPUs", HPG 2011)
 
If you had to guess, how far would you say ~2 TFLOPS of compute & 8GB of GDDR5 could go with Software Based Rendering in a console?

I do not think those words mean what you think they mean.
 
I'd say no. That 1.84 TFLOPS figure you have is not for the CPU,
and software rendering is still painfully slow on a CPU, even compared to a low-end GPU.
PC CPUs have twice the power of the new PS4 CPU, and even so no one's really bothered with software rendering.

I'm looking at the design of the PS4 SoC & it's looking like the Voltron of the Cell processor or Larrabee when the CPU & GPGPU form together.

Notice all the 8's?

  • 8 Jaguar Cores
  • The GPGPU will have 8 Compute-only pipelines
  • Each pipeline has 8 queues, for a total of 64
  • Rumor of 8 very wide vector engines attached to the CPU
  • all using 8GB GDDR5 Unified Memory (not really part of my argument, but it does have 8)

Seems like it's all made to work together as a powerful CPU if a dev chooses to use it all for computing.

I do not think those words mean what you think they mean.

Why do you say that?
 
I think he states that because, in your "software" rendering scheme, I would guess that you would still use all the fixed-function units within the GPU: rasterizer, ROPs, tex units, etc.
To me what you state is more about breaking out of the pretty inflexible graphics pipeline as presented in most common APIs, using compute APIs instead, rather than really going "software".
DICE and others already break out of the graphics pipeline using DirectCompute in BF3.

I guess it is a semantic issue; when I think software rendering, I think of an "all software" solution running on pretty generic hardware. So to me it rhymes with "close to no fixed-function / dedicated hardware". I guess it is disputable.

But to your point, I think that this gen devs are going to break free of the different stages of the graphics pipeline as presented in the last (graphics) APIs. It could be easier on the PS4, as I don't know what MSFT will let devs do or not (/ what they allow in the Durango APIs).
 

That's why I made sure to say 'Software Based Rendering' & not 'Software Rendering'
 
I'm looking at the design of the PS4 SoC & it's looking like the Voltron of the Cell processor or Larrabee when the CPU & GPGPU form together.
No no no! Cell and Larrabee as software renderers are focussed on fully programmable instructions, with loops and branches and random memory access. GPUs work differently, and GPGPU is not a replacement for CPU processing. You cannot freely compute on the GPU while using its full performance. GPU compute has to be considered a subset of all compute operations you could want to do. A software rasteriser aims to negate a CPU's lack of pure throughput by using specialist techniques to gain efficiency, but it cannot compete in performance with a GPU designed for the job of rasterising graphics. And progress in GPUs means some of the techniques in software rendering can be applied to the GPU rasteriser.

There's no future in 'software rendering' on PS4, or any APU. What we'll have is developers combining techniques as best fits, for example using a line-drawing function on the CPU to render power cables, which aren't a good fit for the massively parallel nature of a GPU, and then compositing those power cables with the backbuffer using the GPU-produced Z-buffer as a mask. That'd be similar to Cell rendering volumetric clouds combined with RSX's triangles. There'll be hybrid rendering, but not software (free-form, use any algorithm you like) rendering, as that wastes the performance of the GPU.
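To make that masking idea concrete, here's a toy sketch, assuming the CPU can read (or, on a unified-memory APU, share) the depth buffer the GPU wrote; the buffer and function names are made up for illustration:

Code:
// Hybrid-rendering sketch: draw a line on the CPU, depth-testing each pixel
// against a Z-buffer the GPU already produced, so the cable hides correctly
// behind GPU-rendered geometry. Constant depth per segment for brevity;
// a real renderer would interpolate z along the line.
#include <cstdint>
#include <cstdlib>
#include <vector>

static const int W = 1280, H = 720;
std::vector<float>    gpuDepth(W * H, 1.0f);  // stand-in for the GPU's Z-buffer
std::vector<uint32_t> backbuffer(W * H, 0);   // stand-in for the colour target

void drawCableSegment(int x0, int y0, int x1, int y1, float z, uint32_t rgba) {
    // Standard Bresenham walk over the line's pixels.
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        int i = y0 * W + x0;
        if (z < gpuDepth[i])       // visible only where nearer than GPU geometry
            backbuffer[i] = rgba;
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}

int main() {
    drawCableSegment(100, 50, 900, 400, 0.3f, 0xffffffffu);
}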
 
That's why I made sure to say 'Software Based Rendering' & not 'Software Rendering'
Well, meant that way, I would say yes: devs are likely to break more and more often out of the graphics "pipeline".
Though I wonder how well the fixed-function hardware is exposed in the compute APIs. Nvidia built a software renderer in CUDA; in the best case it achieved ~50% of the performance of the same GPU used "properly", but the researchers stated that the results could actually have been better if the hardware in the texture units had been exposed in CUDA.
I'm not sure that I remember correctly (nor do I remember the name of the presentation...), but it was pretty impressive that they came that close (one or two powers of two is not that far).
 

I think you missed what I was saying completely & you also helped make my point.

"Cell and Larrabee as software renderers are focussed on fully programmable instructions"

This is what I'm talking about: having the CPU & GPGPU formed together like the Voltron of the Cell processor or Larrabee.

Devs writing the code for the CPU, & the CPU & GPGPU doing the job as one. Isn't this one of the points of HSA?
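Roughly the idea, as a CPU-only simulation (a conceptual sketch, not an actual HSA/GPGPU API; real dispatch goes through a runtime). The point it illustrates is the zero-copy handoff that unified memory is supposed to buy: both "sides" work on the same buffer through a shared queue:

Code:
// CPU-only simulation of the unified-memory idea: a pool of "compute" workers
// and the producer touch the very same buffer, handing off work through a
// shared cursor instead of copying between separate memory pools.
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    std::vector<float> data(1 << 20, 1.0f);  // one buffer everyone touches
    std::atomic<size_t> next{0};             // shared work queue (a cursor)

    auto worker = [&] {                      // stands in for a compute pipeline
        for (size_t i; (i = next.fetch_add(4096)) < data.size(); )
            for (size_t j = i, e = std::min(i + 4096, data.size()); j < e; ++j)
                data[j] *= 2.0f;             // in place: no copy to "GPU memory"
    };

    std::vector<std::thread> pool;
    for (int k = 0; k < 8; ++k) pool.emplace_back(worker);  // 8, fittingly
    for (auto& t : pool) t.join();
    std::printf("data[0] = %.1f\n", data[0]);                // expect 2.0
}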
 
Well, meant that way, I would say yes: devs are likely to break more and more often out of the graphics "pipeline".
Though I wonder how well the fixed-function hardware is exposed in the compute APIs. Nvidia built a software renderer in CUDA; in the best case it achieved ~50% of the performance of the same GPU used "properly", but the researchers stated that the results could actually have been better if the hardware in the texture units had been exposed in CUDA.
I'm not sure that I remember correctly (nor do I remember the name of the presentation...), but it was pretty impressive that they came that close (one or two powers of two is not that far).
If you're talking about the paper milk linked earlier in this thread, the results aren't as impressive as they seem at first glance. All shading was ignored by the paper, meaning much of the GTX 480 being compared against was idle.
 
Yeah, probably, but it sounds gimmicky, and thus doubtful. Would ray tracing your billiard balls (perhaps using CSG for perfect spheres) make for a better game somehow? Probably not. You could just as well use standard 3D rendering techniques, cube maps for reflections and so on, and sink a million-plus polygons into each ball just for the hell of it to make them round even in tight close-ups.

The end result would look pretty similar; you could get better reflections with ray tracing (mirror balls reflecting each other), but then you reduce graphics to a rather pointless checkbox feature, and beyond a few computer graphics gearheads here or there - who cares about self-reflecting billiard balls? :D

CSG?
 
"Constructive Solid Geometry", basically describing and drawng an object mathematically rather than building it up using an approximated mass of vertices/polygons.
 
I think you missed what I was saying completely & you also helped make my point.
No, I understood. You have both the CPU and GPU working on a software rasteriser. But it won't work, because the GPU is limited in what it can process. Okay, I'm sure you could design a renderer that'd be a good fit for both (CPU does searching, sorting, structuring, and generates parallel jobs for the GPU to churn through), but it makes little sense to take a GPU designed to process and rasterise triangles, with dedicated hardware to do that very quickly and efficiently, and then turn it to the less efficient job of general computing to implement a CSG Reyes renderer.

The vision of the software renderer is a processor that's purely programmable, with no constraints on the types of workload it can process, with rendering algorithms then designed to exploit its flexibility. Flexibility being the key word. Any processor that has its processing potential tied to certain ways of operating can't be exploited that way. An i7 will provide greater software rendering opportunities than an APU: an APU will offer a weak CPU and a load of limited vector units that can contribute to the parallel jobs, but not to the linear jobs that a software renderer would be hoping to use.
 
No no no! Cell and Larrabee as software renderers are focussed on fully programmable instructions, with loops and branches and random memory access. GPUs work differently, and GPGPU is not a replacement for CPU processing. You cannot freely compute on GPU when using its full performance. GPU compute has to be considered a subset of all compute operations you could want to do. A software rasteriser aims to negate a CPU's lack of pure throughput by using specialist techniques to gain efficiency, but it cannot compete in performance to a GPU designed for the job of rasterising graphics. And progress in GPUs means some of the techniques in software rendering can be applied to the GPU rasteriser.

There's no future in 'software rendering' in PS4, or any APU. What we'll have is developers combining techniques as best, such as, for example, using a line drawing function on the CPU to render power cables, which isn't a good fit for the massively parallel nature of a GPU, and the compositing those power cables with the backbuffer using the GPU produced Z buffer as a mask. That'd be similar to Cell rendering volumetric clouds combined with RSX's triangles. There'll be hybrid rendering, but not software (free-form, use any algorithm you like) rendering as that wastes the performance of the GPU.
I recall Need for Speed 3 featured both software and hardware rendering on the PC, and the software rendering was certainly more limited than the hardware rendering (the IQ was also worse), but it allowed for some unique techniques absent from the hardware rendering.

Even so, as you say, software rendering usually takes place on the CPU, and requires minimal interaction with the GPU, if any, afaik.

I think it was John Romero who said that it is the future of 3D-rendered graphics, which is wrong imho.
 