Predict: The Next Generation Console Tech

Someone else expanding on a similar integrated ARM GPGPU concept:

http://codingrelic.geekhold.com/2010/07/wwmd.html

I'll speculate they will tightly couple the GPU, allowing very low latency access to it as an ARM coprocessor in addition to the more straightforward memory mapped device. This is not unique: some of the on-chip XScale functional units can be accessed both as coprocessors for low latency and as memory mapped registers to get to the complete functionality of the unit. Having very low latency access to the GPU would allow efficient offloading of even small chunks of processing to GPU threads.
One possibility is to let the GPU directly access the ARM processor cache and registers. This would allow GPU offloading to work almost exactly like a function call, putting arguments into registers or onto the stack with a coprocessor instruction to dispatch the GPU. When the GPU finishes, the ARM returns from the function call. For operations where the GPU is dramatically better suited, the ARM CPU would spend less time stalled than it would take to compute the result itself.
If the ARM CPU supported hardware threads, it could switch to a different register file and run some other task while the GPU is crunching.
Or if it had numerous cores? Sounds very similar to the MSNerd six-core ARM concept, with cores dedicated to GPU acceleration of physics and AI.
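
Just to make the idea concrete, here's a minimal C sketch of that "offload as a function call" model. The gpu_dispatch() routine and the opcode are made up for illustration and stubbed in software so it runs; on real silicon that call would be a single coprocessor instruction, MCR/MRC-style:

[code]
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for a coprocessor dispatch instruction (think
 * MCR/MRC on ARM): hand two operands to the GPU and stall until the
 * result comes back in a register. Stubbed in software here. */
static uint32_t gpu_dispatch(uint32_t op, uint32_t a, uint32_t b)
{
    (void)op;
    return a + b; /* pretend the GPU crunched something */
}

#define GPU_OP_ADD 0x01 /* hypothetical opcode */

int main(void)
{
    /* To the caller this looks exactly like a function call: arguments
     * in registers, one "instruction" to dispatch the GPU, result back
     * when the coprocessor completes. A hardware-threaded CPU could run
     * another context instead of stalling here. */
    uint32_t r = gpu_dispatch(GPU_OP_ADD, 2, 3);
    printf("GPU returned %u\n", r);
    return 0;
}
[/code]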
 
UK's PSM3 also seems to think a next-gen console announcement is imminent at E3 2012.
http://sillegamer.com/2012/02/11/psm3-ps4-for-e3-2012-more-at-tgs/
“Nintendo will attend E3 with Wii U, Microsoft will make a next-gen announcement, and Sony need more than Vita to convince the world they’re equipped for a new console era. Don’t hold your breath for hardware designs, but some proof that PS4 exists is vital to Sony’s success. Expect a name –Playstation 4, duh – at E3, with more at TGS.”
I don't care if the consoles don't launch for another two years, but I just want them specs, damn it :)!
 
Microsoft won't announce it at E3 2012. Nor will Sony.

Pachter, for example, said Microsoft told him they wouldn't; he even pressed them on it, saying it would be pretty rotten to lie about it. They confirmed: no next-gen announcement at E3 2012.

Oh, and good luck getting specs out of E3 anyway; look how much trouble we're having getting Wii U specs, and it's launching in nine months!
 
Nintendo never gives specs. MS and Sony might, although I wouldn't be surprised if they decided not to and focused on features, unless they think they have a notable spec advantage and want to leverage that as part of their promotion strategy.
 
Can't wait for the LFLOPS numbers to come in as well as the LB/s memory bandwidth. >_>

(where LOL >> Tera, Peta, etc)

------

Anyways, even if they were to announce specs, I wouldn't want to hear about them more than a year before the hardware's release. It should be obvious why: it's going to be about "expected/targeted/bullshit" specs vs. what's closer to reality. Sure, they can just omit the more specific aspects such as core count, clocks, amount of RAM... basically anything that can change easily... which ultimately makes such an announcement worthless this early on.

"It'll be AUSUM. A next gen graphics core." w000000000 :p
 
Tim Sweeney says there's lots of graphics headroom left: http://www.gamespot.com/unreal-1990/videos/tim-sweeney-dice-2012-session-6350144/

[Slides from Sweeney's talk showing the Samaritan demo's processing requirements]

This is just to buttress my philosophy that we need heavy-duty next-gen specs.
 
I don't really care about the power if it's going to be eaten up by resolution. TV and PC monitor are two different use cases.

How much more complex could a game be at 720p with a 2 TFLOPS GPU? I think most devs will be forced to go 720p, because the games that don't will have less stuff on screen compared to the competition.
 
Yeah, that's what jumped out at me as well; hell, even the incredibly low-spec 6670 rumor almost gets you there (768 GFLOPS).

But I think there are other threads for that (720p vs 1080p).

A GTX 580 is ~1.6 teraflops. Interesting that the requirements seem to have declined a bit from the three GTX 580s that the demo supposedly ran on...

That also reminds me of AMD's and Nvidia's vastly differing flops-to-performance ratios in PC games. In a console I suspect many more of AMD's flops would get used, so where on PC a 6670 is nowhere near half a GTX 580 (as its flops rating suggests), in a console it might actually be.

It's also notable that in the talk Sweeney seems to place a lot of hope on 3D stacking to continue Moore's law far into the future.
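
For anyone wanting to sanity-check those ratings: theoretical single-precision throughput is just ALU count x 2 ops/clock (a multiply-add) x shader clock. A quick sketch using the public specs of both cards:

[code]
#include <stdio.h>

/* 2 flops per ALU per clock, since a multiply-add counts as two ops */
static double gflops(int alus, double clock_ghz)
{
    return alus * 2.0 * clock_ghz;
}

int main(void)
{
    printf("GTX 580: %.0f GFLOPS\n", gflops(512, 1.544)); /* ~1581, i.e. ~1.6 TF */
    printf("HD 6670: %.0f GFLOPS\n", gflops(480, 0.800)); /* 768 */
    return 0;
}
[/code]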
 

The decline would be from the drop in resolution. As someone pointed out in the GAF thread, the demo at its current resolution needed ~4.4 TFLOPS (three 580s) to run. How it sounds to me is that Epic couldn't, or didn't, "optimize it down to one 580". And based on what we know of Wii U and the current Xbox 3 rumors, neither of those consoles IMO is running Samaritan at 720p/30, even with consoles being more efficient.
 
Don't forget Samaritan ran with 4xMSAA as well; that's one reason why the frame rate was low.

You could completely disable anti-aliasing on console, which would reduce the requirements even more.
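
For a rough sense of what 4xMSAA adds: every pixel carries four color and four depth samples, so the raw framebuffer footprint (and the bandwidth to fill it) grows about 4x. The formats below (RGBA8 color + D24S8 depth) are my assumptions, and real GPUs compress MSAA surfaces, so treat this as an upper bound:

[code]
#include <stdio.h>

int main(void)
{
    const double pixels = 2560.0 * 1440.0; /* the demo's reported resolution */
    const int samples   = 4;               /* 4xMSAA */
    const int bytes_px  = 4 + 4;           /* RGBA8 color + D24S8 depth per sample */
    double mib = pixels * samples * bytes_px / (1024.0 * 1024.0);
    printf("4xMSAA framebuffer: ~%.0f MiB (vs ~%.0f MiB with no AA)\n",
           mib, mib / samples);
    return 0;
}
[/code]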
 
"...neither of those consoles IMO is running Samaritan at 720p/30, even with consoles being more efficient."

Ehh? As I noted, even the incredibly low-spec 6670 is rated at 768 GFLOPS, not that far from the supposed 1.1 TFLOPS needed at 720p.

Or, put another way, as the slide says: 4.4x the 360.
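
Those slide numbers hang together if you assume shading cost scales linearly with pixel count: rescaling the quoted ~4.4 TFLOPS at 2560x1440 reproduces both the 2.5 and 1.1 figures, and dividing by Xenos' rated flops gives the ~4.4x-360 line. A sketch, with Xenos at ~0.25 TFLOPS as the one assumption (it's usually quoted at 240 GFLOPS):

[code]
#include <stdio.h>

int main(void)
{
    const double tf_1440p = 4.4;                  /* three GTX 580s, per the demo */
    const double px_1440p = 2560.0 * 1440.0;
    const double tf_1080p = tf_1440p * (1920.0 * 1080.0) / px_1440p;
    const double tf_720p  = tf_1440p * (1280.0 *  720.0) / px_1440p;

    printf("1080p: %.2f TFLOPS\n", tf_1080p);         /* 2.48, ~the quoted 2.5 */
    printf("720p:  %.2f TFLOPS\n", tf_720p);          /* 1.10, ~the quoted 1.1 */
    printf("720p vs Xenos: %.1fx\n", tf_720p / 0.25); /* ~4.4x the 360 */
    return 0;
}
[/code]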
 
Pitcairn XT/PRO at 2+ TFLOPS @ 90-120 W(?) would provide about a 10x leap over Xenos, and it is coming soon.

MS should be looking at this level of tech.
 
"...even the incredibly low-spec 6670 is rated at 768 GFLOPS, not that far from the supposed 1.1 TFLOPS needed at 720p."

AMD TFLOPs /= nVidia TFLOPs

For example, they said it takes 2.5 TFLOPS to run it at 1080p. Is that two 580s or one 6970? Considering it took three 580s to run it at supposedly 2560x1440, I think it's safe to say 1080p won't run on one 6970, and likewise 720p on one 6670. At the same time, I don't think IGN's info about the 6670 being in Xbox 3 is complete, but we'll see.
 
AMD TFLOPs /= nVidia TFLOPs

I would even go further and say that Xenos FLOPS /= AMD VLIW4 FLOPS /= AMD GCN FLOPS (at least the usable ones, not the theoretical ones). You'd hope that if the clocks and shader counts were the same, the newer architectures would achieve better utilization and efficiency from their theoretical flops.

I think a modern Xbox Next GPU, even if it's only 3-4x more powerful measured on theoretical flops alone, will be far more powerful in reality under actual game loads.
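
One way to frame that argument: effective throughput = theoretical flops x utilization. The utilization figures in this sketch are illustrative guesses, not measured values; they just encode the claim that newer architectures extract more of their rated flops under real game loads:

[code]
#include <stdio.h>

struct gpu { const char *arch; double tflops; double util; };

int main(void)
{
    /* rated TF are the public figures; utilizations are hypothetical */
    struct gpu g[] = {
        { "Xenos (360)",     0.24, 0.60 },
        { "VLIW5 (HD 6670)", 0.77, 0.65 },
        { "GCN (Tahiti)",    3.79, 0.80 },
    };
    for (int i = 0; i < 3; i++)
        printf("%-16s %.2f TF rated -> ~%.2f TF effective\n",
               g[i].arch, g[i].tflops, g[i].tflops * g[i].util);
    return 0;
}
[/code]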
 
^ I agree with the first part, but I don't know if that's the case considering benchmarks of Tahiti vs. a 580. At the same time, I don't see a Tahiti-level GPU in a console either.

Dude... do you see Nvidia in the 360? If they use multiples of it, it's not really hard to figure.

Do you see Samaritan on AMD hardware?
 

I'm pretty sure Nvidia flops are mostly more efficient in the world of PC software, where the hardware molds to the software.

In consoles it's vice versa; I have a feeling AMD's flops would be better utilized.

So there isn't some magical difference between Nvidia and AMD flops like you imply; at most it exists in PC software. Nvidia just spent a lot of transistors eking maximum utilization of its flops out of sloppy PC game code. In consoles, the software will be micromanaged to use whatever flops are available, just as 360 and PS3 games extract every ounce from the hardware.

Also worth noting: if the Kepler rumors are true, Nvidia is moving closer to AMD's flop levels and architecture. Kepler is reportedly ~3 teraflops, moving close to Tahiti.

"For example, they said it takes 2.5 TFLOPS to run it at 1080p. Is that two 580s or one 6970?"

The answer could well be "either". I'm of the mind that AMD is (perhaps "was", as noted above) a massively better GPU choice for consoles precisely because of its brute-force advantages, which may not be utilized on PC but would be on console.

Also, two 580s would be 3.2 teraflops, so that's a bit misleading.
 

I disagree; in my view AMD is the choice due to cost and heat. And citing Kepler isn't relevant because it's not out yet, and at the same time Kepler's 3 TFLOPS are most likely going to be better utilized than Tahiti's. Also, the info I was using had the 580 at 1.5 TFLOPS. But the difference isn't that big or relevant, because you would need two 580s to run Samaritan at 2.5 TFLOPS, so that's not misleading. If all consoles end up using AMD (which I expect), more than likely none of them will run Samaritan above 720p. But in the end we're just talking about one company's view of what next gen should be. And while I thought the demo was impressive, not everyone sees it that way, and it won't be the direction for most games either.
 
"For example, they said it takes 2.5 TFLOPS to run it at 1080p. Is that two 580s or one 6970?"


It's "real" FLOPs. It means that in 1 second, at 1920x1080p, the demo needs 2.5 trillion of operations on average. So much more than what a single GTX580 and a 6970 can provide, and more like what Tahiti and Kepler can do. (Giving that Tahiti seems to have an higher efficiency, and Kepler will have a much larger throughput)
I don't think the demo is a good example of what that number of pixel operation can do. If i remember they used an incredibile heavy boken filter, without any much advantage on image quality. The Samaritian demo is just a technology demo for DirectX11 and Unreal Engine 3, and it was put together fairly quickly by few persons, not Unreal Engine 4
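
Taking that 2.5-trillion-ops-per-second figure at face value, you can back out a per-pixel shading budget; the 30 fps target below is my assumption (at 60 fps the budget halves):

[code]
#include <stdio.h>

int main(void)
{
    const double flops  = 2.5e12;          /* quoted requirement at 1080p */
    const double pixels = 1920.0 * 1080.0;
    const double fps    = 30.0;            /* assumed target frame rate */
    printf("~%.0f ops per pixel per frame\n", flops / (pixels * fps)); /* ~40k */
    return 0;
}
[/code]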
 