Predict: The Next Generation Console Tech

I don't see the benefit of two GPUs rendering independently, unless they are also capable of working on the same workload. For deferred rendering, you could have them both render the G-buffer, say, and then divide up other buffers, including custom renderers (a thin-object rendering pipeline using non-triangle-based GPGPU techniques). Otherwise, if they can't be combined on the same workload, you could have half your GPU resources sitting around doing nothing.

Then again, I guess they may have one eye on a GPU being a GPGPU compute cluster. Process physics on one, graphics on the other, and then turn the physics GPU to graphics once the physics has been calculated.
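
Roughly the kind of work-sharing I mean, as a toy sketch only: the tile split, the pass names and the FakeGPU object are all made up for illustration, not any real API.

```python
# Purely illustrative sketch of two GPUs sharing one frame's deferred workload by
# splitting screen tiles. FakeGPU just logs work; none of this is a real API.

TILE = 64  # tile size in pixels (assumed)

class FakeGPU:
    def __init__(self, name):
        self.name = name

    def run(self, pass_name, tiles):
        print(f"{self.name}: {pass_name} on {len(tiles)} tiles")

def split_tiles(width, height):
    """Assign alternating tiles to each GPU (simple static split)."""
    tiles = [(x, y) for y in range(0, height, TILE) for x in range(0, width, TILE)]
    return tiles[0::2], tiles[1::2]

def render_frame(gpu0, gpu1, width=1920, height=1080):
    tiles0, tiles1 = split_tiles(width, height)
    # 1. Both GPUs fill the G-buffer, each for its own tiles.
    gpu0.run("g-buffer", tiles0)
    gpu1.run("g-buffer", tiles1)
    # 2. Lighting and other deferred passes divided the same way.
    gpu0.run("deferred lighting", tiles0)
    gpu1.run("deferred lighting", tiles1)
    # 3. One GPU then composites both halves for scan-out; sharing or copying that
    #    framebuffer efficiently is the hard part in practice.
    gpu0.run("composite + present", tiles0 + tiles1)

render_frame(FakeGPU("gpu0"), FakeGPU("gpu1"))
```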
 
Wasn't there something from LucidLogix allowing that? I mean, IIRC, their software lets you abstract the complete DX/OGL layer and makes it so that any GPU can render anything in the scene... not sure if that was ever released or if it even works well, but... well, I dunno ^^
 
Turn the question around for a moment. Assume that two GPUs could work on the same workload very efficiently (who knows, some kind of tile-based approach) and that they can share memory pools efficiently for textures and so on. What then would be the advantage over spending the same budget on a bigger, newer, single-chip solution, such that a dual-GPU setup would make sense in the first place?
 

Again above my pay grade, but it's often stated here that you get better yields from two smaller chips than from a single chip of the same total area.

The other reason would be to achieve more power than is possible from a single GPU, but that seems just crazy.
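
To put a rough number on the yield argument, here is the simple exponential defect model with an assumed defect density and wafer size, so treat it purely as an illustration:

```python
import math

def die_yield(area_mm2, d0=0.005):
    """Simple Poisson yield model: yield = exp(-D0 * area). D0 is an assumed defect density."""
    return math.exp(-d0 * area_mm2)

WAFER_MM2 = 70000  # rough usable area of a 300 mm wafer (assumed, ignores edge loss)

# One 300 mm^2 GPU per console vs. two 150 mm^2 GPUs per console.
big_good   = (WAFER_MM2 / 300) * die_yield(300)   # ~52 good big dies per wafer
small_good = (WAFER_MM2 / 150) * die_yield(150)   # ~220 good small dies per wafer

print(f"consoles per wafer, 1 x 300 mm^2: {big_good:.0f}")
print(f"consoles per wafer, 2 x 150 mm^2: {small_good / 2:.0f}")
# A defect only kills half as much silicon on the small die, so more of the wafer
# ends up in sellable parts. This ignores duplicated I/O, packaging cost and the
# fact that two small GPUs rarely equal one big one in practice.
```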
 
More importantly, would putting two GPUs on that problem be more efficient?
 
But why stake so much on something that appears, well, not exactly to have lit the world on fire to date?

True, but I believe gaming is where 3D has more impact than anywhere else. Of course, it would be madness to limit the use of two GPUs to this alone.
 
A little APU with a few advanced GPGPU shaders that work in tandem with the CPU as FP coprocessors, plus all the dumb fixed-function stuff like video decoding; then a big brute-force GPU with tons of shaders and TUs optimised for graphics. When you're on the dashboard or watching a movie, you shut down all the non-memory-related transistors on the big GPU and use only the APU.

It's a fehu&liolio trademarked idea.

Next will be a single-chip SSD used as a smart cache between the RAM and mass storage.

After that, the moon.
 
VG247 also seems to think CrossFire and Scalable Link Interface actually work like 3dfx's Scan-Line Interleave :D
The only possible way I can see for "dual GPUs in a console" is APU + discrete GPU, in which the APU's GPU is used for UI rendering and perhaps some GPGPU tasks, and the discrete GPU for actual game rendering.
 

I've just about given up looking for the article, but I recall reading something within the last month or two about how a dual-GPU setup may be ideal in a console. It was also mentioned that it may be easier to cool.

I could be way off the mark here; I'm going to look for the article again after work.
 
Unless APU = Xbox Lite set-top box for Live and apps, and APU+GPU = Xbox Premium for all set-top functionality and high-end games.

They hit both a sub-$199 market and a $399+ market at launch.

YES!! That actually sounds like bloody good business sense... they could fit out the Xbox Lite with just an APU mimicking the 360, and use that for 'core', which wouldn't have to include an HDD (the 360 doesn't) or the Kinect bolt-on.

Then for the 'Premium': double the RAM, higher clocks, HDD, Kinect 2, discrete HD 7850... sounds almost sensible :D
 
Right well, I was only joking/teasing about the 6670 rumour being in the APU part. I know this new article mentioned a discrete.
 
I think we are all dreaming a sweet dream prepped to be dashed by AlStrong if we get a 6-core (3-module?) APU (600 GFLOPS?) and a Pitcairn-class GPU with Kinect 2.
 

But on a serious note, it would explain some of the divergent rumors. The 6670/6650 (frequency differences) are 118 mm² on 40 nm and range from the low 600 GFLOPS to shy of 800 GFLOPS. Something in that class on 28 nm as an APU is pretty much in line with what AMD is projecting into the 2013 timeframe.
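
For reference, that range is just the usual peak-FLOPS arithmetic for a Turks-class part; the 480-ALU count is the HD 6670's, and the two clocks are assumed round numbers:

```python
def peak_gflops(alus, clock_ghz):
    """Peak single-precision GFLOPS: each ALU does one multiply-add (2 flops) per clock."""
    return alus * 2 * clock_ghz

# Turks-class part (HD 6670 family): 480 VLIW5 stream processors, clocks assumed.
for clock_ghz in (0.65, 0.80):
    print(f"{clock_ghz * 1000:.0f} MHz -> {peak_gflops(480, clock_ghz):.0f} GFLOPS")
# 650 MHz gives 624 GFLOPS and 800 MHz gives 768 GFLOPS, i.e. the
# "low 600s to shy of 800" range mentioned above.
```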

Btw, I'm surprised Farid has not descended with a booming laugh, linking back to his prediction that MS/Sony would consolidate and co-release a console. He could still be right, but at a minimum it looks like the hardware itself may be consolidating to a much finer range, with services being the major difference.
 
An APU with a 6670? :p

ಠ_ಠ

--------

PS4 Rumours moved/merged: http://forum.beyond3d.com/showthread.php?t=61745
Don't remind me of that, especially with the noise about a release in 2014... that would indeed be super underwhelming and would perform worse than a barely customised Kaveri.

I wondered about an APU in a dual-graphics setup, but I also wondered about simply two SoCs put together. I'm starting to wonder if Sony could put MS into the seat they had with the PS3 this gen, i.e. the complicated system to code for.

If I were to put together previous rumors and the one of the day, I might favour the option of two SoCs working together versus one SoC + GPU. Huge disclaimer, for what the rumors are worth: we heard that Durango may ship with more RAM than the PS4 (which makes me think that the PS4 could simply be a Kaveri using GDDR5, so facing a 2 GB limitation).

What I think is that MS may choose a simpler GPU than Sony (Northern Islands? I don't remember those codenames exactly) and rely more on the CPU for graphics. I remember reading a presentation where the CPU (a traditional CPU, not Cell-like) played a big role in some part of the graphics pipeline; my memory is iffy, but I wonder if it was about lighting (GI or radiosity?).

Honestly, my memory is iffy and my knowledge on the matter thin to say the least, so let's sum it up as "it could make sense". Cleverer members could dig further.


Moving to the CPU, we hear everything and its contrary... still, hairy speculation is hairy for a reason... :LOL:
At this point I may start to consider a bit more seriously the option of MS continuing with throughput-oriented cores. GPGPU programming is complicated, so the CPU could find itself doing quite a lot of number crunching, and that would explain the choice of a GPU that is less potent in the GPGPU department: why pay for the "fat" in GCN if you're not going to use the benefit?
Not that I'm discarding GPGPU altogether, but only the tasks that are the most natural for the device would be moved to the GPU (it's not like there are no GPGPU workloads HD 6xxx GPUs can handle with success).

With the noise about 8/6/4 cores, MS going with a Xenon 2, a POWER A2 in disguise, or new throughput-oriented cores could make sense. They're tiny and pretty low power; pump up the SIMD units, and backwards compatibility with the 360 should be a breeze.

As an option, the APU/SoC could consist of a slightly reworked POWER A2 module and an HD 6670.

We heard noise about IBM and GF, so I would expect a 32 nm process. Half of Llano's compute resources consist of the GPU, and I don't believe that adding one SIMD would make such a big difference. On the other hand, a POWER A2 module is freakishly tiny. Including the IO, memory controller and glue logic, the chip would remain a "sexy" package with sane power consumption. It would be even sexier on TSMC's 28 nm process (which should be just fine in 2014).
I would put my bet between 150 and 200 mm² (closer to the former, though).

We saw on some AMD GPUs that the GDDR5 controller consumes its fair share of power (like +10 watts). On some other slides we saw the comparative cost of GDDR5 and DDR3... there is no match, DDR3 is significantly cheaper (more than a 2x factor). Mixing in the noise about Durango having more RAM, I would go as far as saying that MS may go with DDR3, and not the most expensive parts. Preparing for a shitload of negative comments; luckily there is no longer a reputation system lol:
this emoticon never existed for a reason lol
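
Roughly what the bandwidth side of that trade-off looks like in numbers; the bus width and data rates here are assumed example configs, not anything rumoured:

```python
def peak_bandwidth_gbs(bus_bits, data_rate_gtps):
    """Peak bandwidth in GB/s = bus width in bytes * effective transfer rate in GT/s."""
    return (bus_bits / 8) * data_rate_gtps

# Assumed example configs: a 128-bit bus with DDR3-1600 vs. GDDR5 at 5.5 Gbps.
ddr3  = peak_bandwidth_gbs(128, 1.6)   # 25.6 GB/s
gddr5 = peak_bandwidth_gbs(128, 5.5)   # 88.0 GB/s

print(f"128-bit DDR3-1600     : {ddr3:.1f} GB/s")
print(f"128-bit GDDR5 5.5 Gbps: {gddr5:.1f} GB/s")
# DDR3 buys capacity per dollar, GDDR5 buys bandwidth; picking DDR3 usually means
# compensating with a wider bus and/or some form of embedded memory.
```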

As I see it, and with the assumption that Durango would pack more raw power than the PS4, I could definitely see something really close to current Cell blades:

two SoCs/APUs, connected to each other by a fast and coherent link;
each SoC connected to 2 GB of DDR3 (not the highest-density chips, not the most expensive ones), presented as a flat memory space / UMA even though the physical implementation says otherwise.
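
A crude way to picture why "flat but physically split" memory would bite, with figures that are entirely made-up placeholders:

```python
# Toy model of the two-SoC layout: a flat address space, but each SoC's local 2 GB
# is fast while the other SoC's 2 GB is reached over the coherent link.
# All figures are made-up placeholders, only meant to show the shape of the problem.

LOCAL_BW_GBS = 25.6   # assumed local DDR3 bandwidth per SoC
LINK_BW_GBS  = 10.0   # assumed usable bandwidth of the inter-SoC link

def effective_bandwidth(local_fraction):
    """Average bandwidth one SoC sees, given the share of its traffic that stays local."""
    return local_fraction * LOCAL_BW_GBS + (1.0 - local_fraction) * LINK_BW_GBS

for f in (1.0, 0.75, 0.5):
    print(f"{f:.0%} local traffic -> {effective_bandwidth(f):.1f} GB/s")
# The further data placement drifts from "everything local", the more the flat
# address space costs, which is where the extra programming effort would go.
```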

Overall the system would pack significantly more muscle than the PS4, say +50% using peak FLOPS figures. It would have twice as much memory. It might consume significantly less power.

On the downside, it would be significantly trickier to program for than a PC or the PS4, which sounds weird coming from MS, but maybe MS had no choice in order to reach its performance goals.
Sony may be in MS's seat this gen, with a system that does better with less. MS may know it and be postponing the launch so that a handful of AAA titles using the system's extra power are ready for launch.

I can't come up with anything hairier than that at the moment... :LOL:

EDIT
Regarding the 6670 rumour being in the APU part: the idea is not that crazy from my POV.
 