PS4 will use an A10 APU?

What's Cell's real-life GFLOP rating?
25.6 GFLOPS per SPE at 3.2 GHz, and also 25.6 GFLOPS for the PPE. A 1:8 Cell would deliver 230.4 GFLOPS peak, 204.8 of that from the SPEs. The 1:7 configuration in PS3 delivers 179.2 GFLOPS from the SPEs.
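The arithmetic behind those figures, as a quick sanity check (a sketch assuming the usual quoted rate of 8 single-precision FLOPs per cycle per core, i.e. a 4-wide multiply-add):

```python
# Cell peak single-precision GFLOPS, from the per-core figures above.
CLOCK_GHZ = 3.2
FLOPS_PER_CYCLE = 8                     # 4-wide SIMD multiply-add = 8 FLOPs/cycle

per_core = CLOCK_GHZ * FLOPS_PER_CYCLE  # 25.6 GFLOPS per SPE (and per PPE)
full_cell = per_core * (8 + 1)          # 1:8 Cell: 8 SPEs + PPE ~= 230.4 GFLOPS
ps3_spes = per_core * 7                 # PS3 exposes only 7 SPEs ~= 179.2 GFLOPS

print(per_core, full_cell, ps3_spes)
```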

All of which is fairly immaterial, as it's utilisation that counts. While we have GFLOP comparisons for HPC computing, I don't think we have any for games.
 
Cell's real FLOP count is hard to ever know; it's fairly easy to max out in isolation, but in real games you don't get that close (often due to scheduling and waits).

*IF* (and I'm not saying anything about this rumour either way) game emulation may be a lot less of a problem than a pure FLOPS comparison suggests. An automated converter from SPU to GPU code wouldn't be that hard tbh, especially if the APU were modified with an SPU-like DMA unit.
I suspect most games could be supported with an automated system, with just the more advanced ones needing custom work and/or OnLive-like streaming.

Xbox 360-like backwards compat seems doable with an APU-like platform, IMHO.
 
Cell's real FLOP count is hard to ever know; it's fairly easy to max out in isolation, but in real games you don't get that close (often due to scheduling and waits).

*IF* (and I'm not saying anything about this rumour either way) game emulation may be a lot less of a problem than a pure FLOPS comparison suggests. An automated converter from SPU to GPU code wouldn't be that hard tbh, especially if the APU were modified with an SPU-like DMA unit.
I suspect most games could be supported with an automated system, with just the more advanced ones needing custom work and/or OnLive-like streaming.

Xbox 360-like backwards compat seems doable with an APU-like platform, IMHO.

Mr Deano C! Not seen you around these parts for a while :cool:
 
Mr Deano C! Not seen you around these parts for a while :cool:

Been busy going indie and now trying to make my first game (see sig). Guess I made the classic mistake of keeping my head down whilst going indie rather than putting my head up more.

But I'm not dead, I think...
 
Cell's real FLOP count is hard to ever know; it's fairly easy to max out in isolation, but in real games you don't get that close (often due to scheduling and waits).
That's an issue affecting all processors, and we'd need utilisation figures for all of them to make these comparisons. If a 30 GFLOP CPU gets 85% sustained utilisation and a 120 GFLOP CPU gets 15%, the lower-peak CPU is the better option, but it all depends on that hard-to-come-by utilisation rate. This is where a fully unified programmable architecture, rather than CPU+GPU, would gain better utilisation in the same way unified shaders do, as there's less waiting around.
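A tiny illustration of that point (the utilisation figures here are hypothetical, just to show the peak numbers alone are misleading):

```python
# Sustained throughput = peak * utilisation; the "slower" chip can win.
def sustained(peak_gflops, utilisation):
    return peak_gflops * utilisation

cpu_a = sustained(30, 0.85)    # ~25.5 GFLOPS sustained
cpu_b = sustained(120, 0.15)   # ~18 GFLOPS sustained

# The 30 GFLOP part out-delivers the 120 GFLOP part.
print(cpu_a > cpu_b)
```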

Been busy going indie and now trying to make my first game (see sig). Guess I made the classic mistake of keeping my head down whilst going indie rather than putting my head up more.
Game looks like a great concept! You need to get on the PR. I'm sure disability organisations will give it coverage as a starting point, but you don't want the attraction to be the characters alone - the game concept should sell it too. XCom/Syndicate meets Cannon Fodder is a good sell to gamers IMO, and you want to get gameplay footage out showing that, so the likes of EG can post an article: "ex-Heavenly Sword dev kickstarting XCom/Cannon Fodder game."
 
Been busy going indie and now trying to make my first game (see sig). Guess I made the classic mistake of keeping my head down whilst going indie rather than putting my head up more.

But I'm not dead, I think...

I've supported you, dude!!! Loved your work, and I never knew you worked on Silent Hill 2!!!!
 
Been busy going indie and now trying to make my first game (see sig). Guess I made the classic mistake of keeping my head down whilst going indie rather than putting my head up more.

After reading the kickstarter, it seems only fitting that you're giving yourself a handicap here ...

By the way, I would have thrown these heroes into a blender and reassembled them using different parts - a blind black guy, a bald guy in a wheelchair, not as unusual as you might think. ;)

But I'm not dead, I think...

But programmers in crunch mode can appear so ... ;)

I just started a little project myself that I'm hoping will result in an interesting game. It's something I've wanted other people to make, but as no-one seems to be doing it, I'm trying to do it myself. I had a first go at it in 2007, and then my son happened. But now I think I can pull it off. After two weeks I'm already at the most crucial point, where I'll find out if the tech works. Quite exciting.
 
Yeah, I noticed that some of the characters had been sort of done before, but not really in computer games (at least as leads). I hoped a little familiarity would help those who really don't get the idea of disabled lead characters...

I've sent emails to most PC gaming websites and a few disability ones. Of course, Reddit upvotes or posts on forums etc. are gratefully received, as are suggestions on where I should be posting and how I should get the word out.

I guess I should make my own thread here in the PC games forum.

Also, questions are good - they help me get out info I possibly haven't thought of.
 
I really have no knowledge in this area (at all), but how effective would the IGP in the A10 be at GPGPU tasks? Would it be possible to include a dedicated GPU for actual rendering and use the IGP for other purposes?
 
I really have no knowledge in this area (at all), but how effective would the IGP in the A10 be at GPGPU tasks? Would it be possible to include a dedicated GPU for actual rendering and use the IGP for other purposes?

That's what all the rumours have been saying all along...
 
If there's an external GPU, and the internal GPU is customized exclusively for processing work, would they be able to remove ALL shading and texturing units, then add a lot more processing cores?
 
If there's an external GPU, and the internal GPU is customized exclusively for processing work, would they be able to remove ALL shading and texturing units, then add a lot more processing cores?
The shading units are the processing cores and the texturing units are glorified load/store units and the path to the cache/memory hierarchy ;).
 
What about the ROPs ?
Would be quite a redesign for maybe 10% more shader units. Stripping the display outputs and UVD/VCE is probably easier and more effective. But I'm not sure we will see an APU + discrete GPU in PS4, so it may be a moot point anyway. ;)
 
How about 2 Piledriver/Steamroller Modules + 960 GCN Shaders/60 TMUs/24 ROPS @ 800 MHz or so with 3x GDDR5 memory controllers on 28 nm?

3 memory controllers connected to 3 GB of GDDR5 makes sense, considering it's one set of memory traces, GDDR5 can be pushed quite fast these days (up to 1500 MHz), and 3 GB would be quite appropriate for the capabilities. The complexities of adding a large amount of eDRAM and a large pool of DDR3 on a 128-bit memory interface do not seem to be worth it to me. Something like 32 MB of eDRAM would almost certainly have to be via an MCM, adding further complexity.

1300 MHz GDDR5 (5.2 Gbps effective) x 192-bit interface = 124.8 GB/s. Quite a bit for a GPU at 1536 GFLOPS at 800 MHz, along with the two Piledriver modules.
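Checking both of those figures (a sketch assuming GDDR5's four data transfers per command-clock cycle and GCN's two FLOPs per shader per cycle):

```python
# GDDR5 bandwidth: quad data rate, so 4 transfers per command-clock cycle.
mem_clock_mhz = 1300
bus_bits = 192
bandwidth_gbs = mem_clock_mhz * 4 * bus_bits / 8 / 1000   # bytes/s -> GB/s

# GCN peak: each shader issues one multiply-add (2 FLOPs) per cycle.
shaders = 960
core_clock_mhz = 800
gflops = shaders * 2 * core_clock_mhz / 1000

print(bandwidth_gbs, gflops)   # 124.8 GB/s, 1536.0 GFLOPS
```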
 
That sounds reasonable. I wonder what the die size and TDP for such a chip would be at 28nm? Surely TDP could be kept below 200W quite easily - as a rough estimate, it would probably be pretty close to 150W with moderate clocks (3.2-3.6 GHz) on the CPU portion.

For a chip approaching 200W they could include a full Pitcairn (or cut-down successor) with a 256-bit memory interface and more aggressive clocks all round.
 
That sounds reasonable. I wonder what the die size and TDP for such a chip would be at 28nm? Surely TDP could be kept below 200W quite easily - as a rough estimate, it would probably be pretty close to 150W with moderate clocks (3.2-3.6 GHz) on the CPU portion.

For a chip approaching 200W they could include a full Pitcairn (or cut-down successor) with a 256-bit memory interface and more aggressive clocks all round.

IMO better to remove those Piledriver cores (it seems Steamroller has been pushed back) and replace them with some Jaguar cores. Piledriver is pretty big and a power hog. You could put a much better GPU inside the APU. I really don't want to see the GPU part stripped down to make room for an average, power-hungry CPU.
 
Not in CPU FLOPS it isn't. On that aspect, Cell holds up surprisingly well despite its age. You need an Intel Sandy Bridge-E or similar to really beat Cell on float performance.

You only need a 3.3 GHz quad-core Sandy Bridge like the 2500K to beat Cell in floating point performance on paper.
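A rough back-of-envelope for that claim, assuming AVX Sandy Bridge can issue one 8-wide single-precision add plus one 8-wide multiply per cycle per core (16 SP FLOPs/cycle), and comparing against PS3's 1:7 Cell:

```python
# Sandy Bridge i5-2500K-class: 4 cores x 3.3 GHz x 16 SP FLOPs/cycle (AVX).
cores, clock_ghz, flops_per_cycle = 4, 3.3, 16
sandy_bridge = cores * clock_ghz * flops_per_cycle   # ~211 GFLOPS peak

ps3_cell = 25.6 * (7 + 1)   # 7 SPEs + the PPE = 204.8 GFLOPS

print(sandy_bridge, ps3_cell, sandy_bridge > ps3_cell)
```

Note a full 1:8 Cell (~230 GFLOPS) would still edge it out on paper; it's PS3's 7-SPE configuration that the quad Sandy Bridge overtakes.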

Haswell would do the same in a dual-core model while offering multiple times more single-threaded performance, being vastly easier to programme, and drawing a tiny amount of power. It seems like a far better choice (technically) for a console CPU. And that's not even considering what the on-die GPU would bring to the table.
 