Predict: The Next Generation Console Tech

Given that the original X360's power brick was rated at 203W, then yes, I think it did.

Some things wouldn't fit very well on a console, like 6GB of GDDR5 on a 256-bit bus, and 8 cores with 4 threads each would be a bit of overkill. Sacrificing CPU cores in order to switch to GCN could make more sense IMO, since I think developers would rather keep all the visual processing on the GPU.
Four 4-threaded OoO PowerPC cores at 4.5GHz would probably be better, though we're yet to see what frequencies the 28nm HP process will allow at a reasonable power consumption.

Yeah, perhaps 6 cores was overkill, but I was thinking in terms of 1 core kept redundant for the system & Kinect 2... but perhaps with that much general processing power such things shouldn't be worried about.

My GPU specs fall under what... 300mm²? If that? In that case you could bolt on 128MB of unified eDRAM and, say, drop the main RAM down to 4GB...

I really don't think we are going to see GCN; it's got too much GPGPU nonsense on it that takes up die space = cost. Anand has already stated that VLIW5 is best for most gaming scenarios, and the 6870 refines it further to chop out FP64, whatever that means. So the 6870 is the best gaming hardware pound for pound, don't you think?
 
GPGPU isn't nonsense since it gets more perf/watt and perf/transistor than CPUs for post-processing effects and physics. Eventually, it could also do sound processing and A.I. assist.



VLIW5 is more space-efficient given current games, which were built around DX9c capabilities from the start. But you can't design a 2013/2014 console based on games made for 2005 hardware.
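To put rough, purely illustrative numbers on that space-efficiency trade-off: the payoff of a 5-wide VLIW bundle depends on how many of its slots the shader can actually fill,

$$\text{slot utilization} = \frac{\text{ops issued}}{5 \times \text{cycles}}$$

A DX9-era pixel shader doing a vec4 MAD plus a scalar op can fill 5/5 slots per bundle, while a dependent scalar chain of the kind common in compute code fills only 1/5 = 20%, which is where a scalar architecture like GCN starts to win back its extra area.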
 
GCN brings multitasking features, making GPGPU scenarios more likely. Wouldn't it be nice to have?
Yes, square millimeters are expensive, but we will be constrained by power use and bandwidth.
 
I really don't think we are going to see GCN; it's got too much GPGPU nonsense on it that takes up die space = cost. Anand has already stated that VLIW5 is best for most gaming scenarios, and the 6870 refines it further to chop out FP64, whatever that means. So the 6870 is the best gaming hardware pound for pound, don't you think?

I think we could see a GCN-lite without support for double precision. It seems to me that AMD went pretty far to increase their support for GPGPU for scientific computation, but I don't believe you need 64-bit support for gaming.

Doing a 64-bit float multiply-accumulate is massively more complicated than a 64-bit integer multiply-accumulate; even reusing existing hardware for 32-bit floats can add plenty of overhead. So I expect a hybrid architecture that will get the benefits of both GCN and VLIW5.

I really hope MS does provide developers with a very compute-capable GPU, because I think we could really see some creative uses both in and out of games.
 
GPGPU isn't nonsense since it gets more perf/watt and perf/transistor than CPUs for post-processing effects and physics. Eventually, it could also do sound processing and A.I. assist.

It may surpass traditional CPUs at physics, but what about dedicated stream processors like the old Ageia physics card, or Cell, in the real world rather than in theory? How do they compare in terms of realtime physics?
 
GPGPU isn't nonsense since it gets more perf/watt and perf/transistor than CPUs for post-processing effects and physics.
As long as your algorithm maps to nearly non-existent data storage on the GPU and can be streamed, sure. When you need a few tens of kB of data per "thread", a GPU won't really work all that well.
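To put rough numbers on that (illustrative figures, assuming a GCN-style compute unit with 64 KB of local data share and enough wavefronts resident to hide memory latency):

$$\frac{64\ \text{KB LDS}}{16\ \text{wavefronts} \times 64\ \text{work-items}} = \frac{65536\ \text{B}}{1024} = 64\ \text{B per work-item}$$

which is a far cry from the tens of kB per thread that a CPU can keep hot in its L1/L2.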
 
I really don't think we are going to see GCN; it's got too much GPGPU nonsense on it that takes up die space = cost. Anand has already stated that VLIW5 is best for most gaming scenarios, and the 6870 refines it further to chop out FP64, whatever that means. So the 6870 is the best gaming hardware pound for pound, don't you think?

Current engines don't use much of the capability of modern graphics architectures.
The ability to use the GPU in new ways will lead to new solutions to well-known problems in the graphics space. More than that, if Microsoft and Sony want their next-generation consoles to last longer, this is a good way to make them "future-proof".
I'm pretty sure that Microsoft will want a GCN graphics card in their next console. They are investing in GPGPU technology, both with DirectCompute and more recently with C++ AMP.
In this light, I don't think it's a surprise to see that GCN supports C++.
The new technology demo for the 7900 series is just an example of the potential of GPGPU applied to graphics tasks. The demo uses a traditional forward renderer, but the lighting is done through a compute solution, to achieve an impressive result: nice lighting together with complex materials. And this is just scratching the surface of what is possible with next-generation architectures.
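As a toy illustration of what "lighting through a compute solution" can look like on the C++ AMP side, here's a minimal sketch that accumulates point-light contributions per pixel in a compute pass. The Light struct, the buffers, the attenuation model and the crude position reconstruction are my own assumptions for illustration; this isn't taken from the AMD demo.

```cpp
// Minimal C++ AMP sketch: per-pixel accumulation of point lights in a
// compute pass, instead of looping over lights in the pixel shader.
#include <amp.h>
#include <vector>
using namespace concurrency;

struct Light { float x, y, z, radius, intensity; };   // illustrative layout

void accumulate_lighting(std::vector<float>& lum,         // W*H output luminance
                         const std::vector<float>& depth, // W*H view-space depth
                         const std::vector<Light>& lights,
                         int W, int H)
{
    array_view<float, 2> out(H, W, lum);
    array_view<const float, 2> z(H, W, depth);
    array_view<const Light, 1> ls((int)lights.size(), lights);

    parallel_for_each(out.extent, [=](index<2> px) restrict(amp) {
        // Crude view-space position from depth (assumes unit focal length).
        float vz = z[px];
        float vx = (px[1] - W * 0.5f) / (W * 0.5f) * vz;
        float vy = (px[0] - H * 0.5f) / (H * 0.5f) * vz;

        float sum = 0.0f;
        for (int i = 0; i < ls.extent[0]; ++i) {
            Light l = ls[i];
            float dx = l.x - vx, dy = l.y - vy, dz = l.z - vz;
            float d2 = dx * dx + dy * dy + dz * dz;
            if (d2 < l.radius * l.radius)          // cheap radius culling
                sum += l.intensity / (1.0f + d2);  // toy attenuation
        }
        out[px] = sum;
    });
    out.synchronize();  // copy the result back to the host vector
}
```

A real tiled compute pass would also cull lights per screen tile in group-shared memory, which is exactly the kind of thing the traditional pixel pipeline can't express.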
 
GPGPU on a console should be tailored closely for graphics needs, like many effects and rendering methods that would otherwise be too expensive to be done via the traditional graphics pipeline. Anything more exotic will simply be a waste of silicon space, IMHO.
A well-designed CPU and its ISA should be able to provide enough for everything else, incl. rich gameplay physics, AI, sound processing, etc. Simple non-graphics workloads like pure effect-only physics (particles, debris, etc.) could still fit in the GPU compute domain, for faster processing.
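On the effect-only physics point, a minimal hedged sketch of what that might look like with C++ AMP (the particle layout and integration scheme are illustrative assumptions, nothing more):

```cpp
// Minimal C++ AMP sketch of "effect-only" physics on the GPU: integrating
// a batch of debris particles under gravity with a crude ground bounce.
#include <amp.h>
#include <vector>
using namespace concurrency;

struct Particle { float x, y, z, vx, vy, vz; };

void step_particles(std::vector<Particle>& particles, float dt)
{
    array_view<Particle, 1> p((int)particles.size(), particles);
    parallel_for_each(p.extent, [=](index<1> i) restrict(amp) {
        Particle q = p[i];
        q.vy -= 9.81f * dt;   // gravity
        q.x  += q.vx * dt;
        q.y  += q.vy * dt;
        q.z  += q.vz * dt;
        if (q.y < 0.0f) { q.y = 0.0f; q.vy *= -0.5f; }  // bounce, lose energy
        p[i] = q;
    });
    p.synchronize();          // copy results back for rendering on the CPU side
}
```

Nothing here feeds back into gameplay, so latency and CPU round-trips don't matter much, which is exactly why this kind of workload is the easy win for GPU compute.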
 
It's not as simple as that. If a GPGPU can produce more usable physics and AI results in a given amount of silicon than a CPU, then it makes sense to go very minimal on the CPU and focus more heavily on the GPU. I don't think we're at that point yet, and devs may be most displeased to have to turn much of their code into GPGPU stuff. For the sake of development I'd recommend a good, normal CPU. But if the devs can be shielded from the effort of GPGPU coding and it yields results, then it should be picked as the more efficient option.
 
It's not as simple as that. If a GPGPU can produce more usable physics and AI results in a given amount of silicon than a CPU, then it makes sense to go very minimal on the CPU and focus more heavily on the GPU. I don't think we're at that point yet, and devs may be most displeased to have to turn much of their code into GPGPU stuff. For the sake of development I'd recommend a good, normal CPU. But if the devs can be shielded from the effort of GPGPU coding and it yields results, then it should be picked as the more efficient option.

Maybe I was too strong in my wording... calling GPGPU abilities 'nonsense' was a bit harsh, and if I were designing a purely future-proofed gaming machine that retails at £500 then I wouldn't have said that.

My thinking was along Shifty's lines: in terms of efficiency and enabling devs to make decent games from the get-go at reasonable cost, maybe a high-powered, traditional setup would be better for the short term. You could get much more computing power from your die space with a VLIW5 setup, and that could still be used for physics and GPGPU work anyway... just not as well.
 
Doing a 64-bit float multiply-accumulate is massively more complicated than a 64-bit integer multiply-accumulate,
Given that an N×N-bit multiply is O(N²) and that the mantissa of a 64-bit float is, IIRC, about 53 bits, is it really massively more complicated?

Of course, the addition in the MAD vs FMAD does complicate matters, as does handling IEEE 754 special cases, but surely the difference in the multiplier array must make up for a lot.
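Putting rough numbers on the multiplier-array point (a back-of-the-envelope comparison that ignores Booth encoding and similar tricks):

$$\left(\frac{53}{64}\right)^2 \approx 0.69$$

so a 53×53 significand multiplier needs roughly 30% fewer partial-product bits than a full 64×64 integer multiplier; the extra cost of the float version sits mostly in alignment, normalization and rounding rather than in the array itself.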
 
It's not as simple as that. If a GPGPU can produce more usable physics and AI results in a given amount of silicon than a CPU, then it makes sense to go very minimal on the CPU and focus more heavily on the GPU. I don't think we're at that point yet, and devs may be most displeased to have to turn much of their code into GPGPU stuff. For the sake of development I'd recommend a good, normal CPU. But if the devs can be shielded from the effort of GPGPU coding and it yields results, then it should be picked as the more efficient option.

Yep, vectorizing code is neither easy nor cheap. I remember from the Tim Sweeney presentation on the future of graphics that the estimated cost for a heavily GPGPU-based game engine was 10 times higher than for the current pipeline.
Future consoles will still need faster single-threaded performance.
 
Given that an N×N-bit multiply is O(N²) and that the mantissa of a 64-bit float is, IIRC, about 53 bits, is it really massively more complicated?

Of course, the addition in the MAD vs FMAD does complicate matters, as does handling IEEE 754 special cases, but surely the difference in the multiplier array must make up for a lot.

Sorry, that's what I actually meant. In the past, the complexity I've run into was due to the need to normalize the results and manage the exponent during the accumulation. This work was for a custom DSP in which we wanted single-cycle FMAD throughput; the adder turned out to be the slow piece of the design and needed the most work, while the multiplier was trivial.
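For reference, a textbook-style sketch of why the add/normalize path dominates in a fused multiply-add (exact widths vary by implementation): for $a \times b + c$ with $p$-bit significands,

$$p = 53:\qquad \text{product width} = 2p = 106\ \text{bits},\qquad \text{addend alignment range} \approx 3p + 2 = 161\ \text{bits},$$

followed by a leading-zero count, a normalization shift of up to about $2p$ bits, and rounding, which is consistent with the adder rather than the multiplier array being the critical path.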
 
http://www.gamesindustry.biz/articles/digitalfoundry-in-theory-can-wii-u-offer-next-gen-power

Digitalfoundry causing global outrage on the Internets with Wii U predictions.

But they are right. There is no way the Wii U gets even close to the 360 S's TDP without sounding like a hoover worse than the launch 360. I predict very low clocks.

I think I'll wait till E3 to see if Nintendo show a redesign that is closer to 360S in size.

If the size is the same as what they showed last year, then I fear you may be right....

Nintendo are good at designing a console but even they can't perform a miracle.
 
It's not as simple as that. If a GPGPU can produce more usable physics and AI results in a given amount of silicon than a CPU, then it makes sense to go very minimal on the CPU and focus more heavily on the GPU. I don't think we're at that point yet, and devs may be most displeased to have to turn much of their code into GPGPU stuff. For the sake of development I'd recommend a good, normal CPU. But if the devs can be shielded from the effort of GPGPU coding and it yields results, then it should be picked as the more efficient option.

Wouldn't the decision between GPU-heavy/CPU-light and the moderate CPU you're asking for be based on the projected console lifespan?
 
OK, I won't discuss a +250 W TDP chip any further, nor the fact that you could hit the size limit of the optical reticle, etc. I had a nice post that I could not finish before going out yesterday; my wife turned the computer off, and I won't go through it all again, searching for links, data, die shots, etc.

I agree to politely disagree :)
 
Anyway, you don't need graphs to see the truth: ALL console manufacturers used IBM PPC last gen, and ALL consoles will use it again this gen. Outside of the desktop, where other things such as software come into play, PowerPC makes total sense, else why would they all use it?

I'm sorry, but I'm still seeing no evidence for these claims. Console manufacturers have more than just perf/mm² and perf/watt to consider. Microsoft went with Intel for the first Xbox and didn't regret it at all from a power point of view. It was the lack of IP ownership that hurt them.

From a performance-per-watt perspective, and performance per mm² of die, PowerPC dominates; I think only MIPS beats it, and this new UPU µarch from ICUBE.

Again, what's the evidence for this? What's the metric you are using to measure performance? How relevant is that to a games console? I find it very hard to believe that a PPC of similar power draw to, say, an ARM CPU would perform as well. And similarly, I'd like to see the PPC that's the same size as a Sandy Bridge core offering anywhere near the performance.

What PPC architecture are you even using as a basis for comparison? Because I think most people agree the PPC cores in Xenon and Cell suck pretty badly.
 
Breaking...

ARM has announced a new ARMv8-A 64-bit core - Atlas

http://www.eetimes.com/electronics-news/4235556/ARM-tips-gods-giants-roadmap

ARM has chosen a classical giant and a god for the names of its next-generation high-end processing cores. Atlas and Apollo will implement the ARMv8-A architecture, which supports 64-bit computing. The chips are being designed for likely implementation in 20-nm manufacturing process technology and to address applications from servers down to smartphones. Again, lead partners are signed up, ARM said.

The linking of the names as Atlas/Apollo could again suggest that Atlas is intended to be the big and Apollo the little in a big-little pairing. "We expect Atlas, Apollo to come into volume in 2014 and at that time it will be a 20-nm world out there."

Finally, a console-worthy 64-bit ARM core; that's a game changer in the whole ARM-in-a-console discussion.
 
http://www.gamesindustry.biz/articles/digitalfoundry-in-theory-can-wii-u-offer-next-gen-power

Digitalfoundry causing global outrage on the Internets with Wii U predictions.

But they are right. There is no way the Wii U gets even close to the 360 S's TDP without sounding like a hoover worse than the launch 360. I predict very low clocks.
No matter how I look at it, I have a growing feeling that the enthusiasts here will be disappointed by the next box's specs. Proper benchmarks or not, numbers pulled out of one's ass or not, the whole thing is wrapped in a noise that doesn't scream "awesomeness".

As for the article, well, it's close to full of shit imho.
1) It's unclear if the Wii U form factor is definitive.
2) Say the form factor doesn't change: what is in the Wii U? Does Nintendo need room for an HDD? Which kind of drive do they use? They speak about it but then don't take it into account...
Below is a photo of the last xbox 360 revision:
[image: DVDtop.jpg]


That's from Anandtech, by the way. It's pretty obvious how little of the total volume is taken by the chips, the RAM and the cooling system.
Overall the 360 S is a very sturdy design; compare it to the Wii and the difference is clear.
The optical drive in the Wii is ~1/3 the size of the one in the 360 (maybe less...), there is no HDD, etc.
As 3Dilletante said, they should have tried to do a comparison "ceteris paribus". We don't know what's in the Wii U to begin with.

Then, the last 360 pulled north of 90 watts; it's possible to do more from that power budget, a lot more for sure. It's possible to do twice as much (assuming we're speaking of the GPU mostly) within a smaller power envelope. It would have been interesting (for them; I did my homework at least...) to look at how power consumption varies with clock speed for the CPU and then for the GPU.
The comparison for CPUs is even more telling: as they operate over a wider spread of frequencies (south of 2GHz to north of 3GHz), you can get close to +200% figures.
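For what it's worth, the usual first-order model backs that up (the clock/voltage points below are illustrative, not measured figures):

$$P_{\text{dyn}} \approx \alpha C V^2 f \quad\Rightarrow\quad \frac{P_{3.2\,\text{GHz},\ 1.2\,\text{V}}}{P_{2.0\,\text{GHz},\ 0.9\,\text{V}}} = \frac{3.2}{2.0}\times\left(\frac{1.2}{0.9}\right)^2 \approx 2.8$$

i.e. close to a +200% jump in dynamic power once you account for the higher voltage needed to reach the higher clock.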

Granmaster would have done better to make a point about the impact of the highly clocked CPU on today's systems' power consumption, and about how two shrinks and a redesign did not save the day, instead of spreading that FUD :smoking:
 