Predict: The Next Generation Console Tech

60fps is far less likely to come, but 1080p does not sound unreasonable.
People just don't care about high refresh rates anymore. It's the COD games, some racers and GOW that run higher than 30fps, and even some of them may choose to abandon this feature in the future.

1080p on the other hand may help minimize aliasing artifacts of all sorts, and with increasing poly counts it'd probably help a bit with GPU efficiency as well (smaller triangles mean more wasted cycles). It's also easier to implement than 60fps, IMHO.
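To put a rough picture on that parenthetical (this is just a toy model of my own, with made-up triangle coordinates, not vendor data): GPUs shade pixels in 2x2 quads, so a triangle that only touches one or two pixels of a quad still pays for all four shading lanes. At a higher resolution the same triangle covers more pixels, so the wasted fraction shrinks. Something like:

```python
# Toy model: fraction of shaded quad lanes that actually land inside a triangle,
# as the pixel grid gets denser (i.e. as resolution rises for the same geometry).

def edge(ax, ay, bx, by, px, py):
    # Signed area test (edge function) used for point-in-triangle checks.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def quad_efficiency(tri, res):
    covered, shaded = 0, 0
    (ax, ay), (bx, by), (cx, cy) = [(x * res, y * res) for x, y in tri]
    for qy in range(0, res, 2):           # walk the screen in 2x2 quads
        for qx in range(0, res, 2):
            hits = 0
            for py in (qy + 0.5, qy + 1.5):
                for px in (qx + 0.5, qx + 1.5):
                    w0 = edge(bx, by, cx, cy, px, py)
                    w1 = edge(cx, cy, ax, ay, px, py)
                    w2 = edge(ax, ay, bx, by, px, py)
                    if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                        hits += 1
            if hits:
                covered += hits           # pixels the triangle really covers
                shaded += 4               # but the whole quad gets shaded anyway
    return covered / shaded

tri = [(0.1, 0.1), (0.9, 0.2), (0.4, 0.85)]   # made-up triangle in normalized screen space
for res in (8, 16, 64):                        # coarse vs fine pixel grids
    print(res, f"{quad_efficiency(tri, res):.0%}")
```

Running it, the quad utilisation climbs towards 100% as the grid gets denser relative to the triangle, which is the efficiency gain being talked about.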
 
Why would Sony and Microsoft settle for less when we can achieve this relatively comfortably on mid-range PC hardware now?
I don't understand this line of thinking. You're effectively asking for devs to stick to 2011 visuals for a ~2013-2020 console.

And as far as "5770 being enough for 1080p", well... even that's dubious considering DX11 and more advanced shaders.
 
Shifty, if you aim for a lower res with higher levels of AA next-gen then you would dedicate more hardware to AA efficiency and less to brute force. It would change the dynamic of what you ask the IHV to design.
Except hardware isn't particularly designed around a certain resolution and AA, and even if the hardware tries to do that (XB360 designed for 720p with tiling and 2x or 4x MSAA) the devs can ignore that design and do other things. The engineers can only provide devs with a balanced set of resources, and the devs choose how to use it: more AA and lower resolution, or more framerate and less resolution, or more resolution and less of something else. Hence the other thread to discuss resolution, while the engineers look instead to make economical, efficient choices in CPU, GPU, RAM and BW to hand over to the developers to do whatever they will with.

Personally my feeling is that after this amount of time between this gen and next gen a true 1080p resolution with decent AA and AF is an ideal goal
Personal hopes belong in the other thread.

So let's think of hardware currently available that at a minimum could drive that resolution with enough graphics fidelity to keep the new games looking "next-gen" (global illumination, high-res textures, ray tracing, HDR precision etc etc)...
Right. And then whatever hardware they pick to power your 1080p60 games, devs will use to make 720p30 games that look better in terms of pixel quality. This really is a subject for the other thread, because resolution and framerate aren't dictated by the hardware but by the software choices. ;)
 
Any chance we will see motion interpolation tech implemented in the next gen consoles, on a hardware level (for "pseudo 60fps")? It's widely implemented in HDTVs nowadays, and I imagine the interpolation should be getting better. Or is it a no-go for some reason?
 
Any chance we will see motion interpolation tech implemented in the next gen consoles, on a hardware level (for "pseudo 60fps")? It's widely implemented in HDTVs nowadays, and I imagine the interpolation should be getting better. Or is it a no-go for some reason?

60fps in games is generally implemented to reduce input latency. Motion interpolation tech in HDTVs is there to simulate a more fluid image, which is a totally different goal.
 
Any chance we will see motion interpolation tech implemented in the next gen consoles, on a hardware level (for "pseudo 60fps")? It's widely implemented in HDTVs nowadays, and I imagine the interpolation should be getting better. Or is it a no-go for some reason?

rekator got it right, but to elaborate more: Motion interpolation always introduces more latency in the output path. For media consumption, this is a complete non-issue -- just remember to delay the audio by the same amount, and the viewer will be none the wiser. For games, this is very noticeable and annoying.
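To put a rough number on that (these figures are just my assumptions, not anything measured): the set can't synthesise the in-between frame until it already has the next real frame in hand, so the picture you see trails the console's output by at least one full source frame plus whatever the motion-estimation pass costs.

```python
# Back-of-envelope latency cost of TV-side motion interpolation (assumed figures).
source_fps = 30                          # what the console actually renders
frame_time_ms = 1000 / source_fps        # ~33 ms between real frames
processing_ms = 10                       # assumed motion-estimation/compositing time in the TV

added_latency_ms = frame_time_ms + processing_ms
print(f"extra input lag from interpolation: ~{added_latency_ms:.0f} ms")
```

And that's on top of the game's own input-to-photon latency, which is why it's tolerable for films but annoying with a pad in your hands.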
 
60fps is far less likely to come, but 1080p does not sound unreasonable.
People just don't care about high refresh rates anymore. It's the COD games, some racers and GOW that run higher than 30fps, and even some of them may choose to abandon this feature in the future.

1080p on the other hand may help minimize aliasing artifacts of all sorts, and with increasing poly counts it'd probably help a bit with GPU efficiency as well (smaller triangles mean more wasted cycles). It's also easier to implement than 60fps, IMHO.

Well, when art is excessively expensive it does push the cost/benefit the other way, towards supporting a faster frame rate: it would probably cost less to implement, and the difference between 60Hz and 30Hz games wouldn't be as pronounced as a console marches up the diminishing-returns curve for graphics fidelity.

With that in mind, and since 3D is probably a given, I'd propose that faster memory would be preferable to more memory, especially if fast flash is implemented: extra memory is out, fast flash plus fast memory is in. I don't think it's going to be a 60fps/1080p tradeoff; they can do both, and anything that improves the overall user experience is a good thing. As Apple has proved, many of these supposedly 'intangible' aspects of hardware are actually tangible to the end user.

Edit: Oops, I think I just caused an infraction. I was reading back on the thread and I missed the mod comments.
*gets down on knees and begs for forgiveness*

Anyway, I remember a great piece from Ars about console streaming and how the PS2 was more efficient than the PC platform simply because of how efficiently it operated. Let us not forget this in our pursuit of console perfection: as we look towards the PC for guidance on what the next-generation consoles should be, we should remember that the best console isn't the most PC-like one, but the one which gives the most performance per dollar in a practical sense. So we shouldn't overlook the value of streaming speed over simply having large, dumb caches which are used inefficiently. Once streaming is improved, the memory required for the same level of performance isn't as large, and at that point more efficient and faster memory solutions become a better fit for a console's main use scenarios, which are games and media.
 
Or even TVs and players. Would be good to offer set-top boxes with PSS games and SonyNET media for added value, and unified hardware would help there.
 
I don't understand this line of thinking. You're effectively asking for devs to stick to 2011 visuals for a ~2013-2020 console.

And as far as "5770 being enough for 1080p", well... even that's dubious considering DX11 and more advanced shaders.
Even so, the 5770 is a 100W part. With probably only one full-node process shrink before the next console generation, that may be the best we can expect to see.
 
Even so, the 5770 is a 100W part. With probably only one full-node process shrink before the next console generation, that may be the best we can expect to see.

I have admired all of your posts (and Alstrong's too).

Pardon my intrusion; I agree with all of you, but there have been better AMD GPUs than the 5770 since July, namely the 6990M at 100 watts with 1120 SIMDs/stream processors, and the 7870 with 1536 stream processors at 120 watts is still coming on the 28nm shrink by the end of this year. Perhaps some future customised "7870M" version (with pipes more oriented towards console purposes) may be possible for next-gen consoles coming in 2013/2014.

http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units

The 7870 at 28nm has almost the same specs as the 6970, at only 120 watts.

http://www.nordichardware.com/news/...-new-architecture-and-xdr2-rambus-memory.html
 
I'll attempt to predict the next XGPU, assuming that the die space dedicated to the GPU is close to Xenos, that it is built on a 22nm fab process, and that it must draw less than 150 Watts at peak.

Looking back at AMD GPUs that had die sizes between 250 mm2 and 260 mm2, the candidates are:

10/1/2005 X1800: 254mm2, 90nm, 321 million transistors, 500MHz, 256MB GDDR3, ~140 Watts, 83 GFLOPS
11/16/2005 Xenos: ~260mm2, 90nm, 337 million transistors, 500MHz, 512MB GDDR3, 125? Watts, 240 GFLOPS
6/25/2008 4850: 256mm2, 55nm, 956 million transistors, 575MHz, 512MB GDDR5, 110 Watts, 1000 GFLOPS
10/22/2010 6850: 256mm2, 40nm, 1700 million transistors, 775MHz, 1024MB GDDR5, 127 Watts, 1488 GFLOPS

A rough extrapolation based on extremely dubious mathematics, namely the squaring of the ratios between fab processes, suggests the next XGPU looks like:

Fall 2013 Next XGPU: ~260mm2, 22nm, ~5000 million transistors, <1000MHz, <=4096MB GDDR5/XDR2, <150 Watts, <~4000 GFLOPS
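For what it's worth, here's that squaring-of-ratios arithmetic written out, starting from the HD 6850 row above. Ideal scaling is assumed, which is why it comes out a little above the ~5000 million / ~4000 GFLOPS figures (hence the < signs):

```python
# The "square the process ratio" extrapolation from the table above (ideal scaling assumed).
old_node_nm, new_node_nm = 40, 22        # HD 6850's node -> assumed 22nm-class node for 2013
old_transistors_m = 1700                 # HD 6850, millions of transistors
old_gflops = 1488                        # HD 6850 peak GFLOPS

scale = (old_node_nm / new_node_nm) ** 2     # ~3.3x the transistors in the same ~256mm2
print(round(old_transistors_m * scale))      # ~5600 million, i.e. roughly the ~5000M guessed
print(round(old_gflops * scale))             # ~4900 GFLOPS if everything scaled perfectly, which it won't
```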

I am by no means an expert at this stuff, so any help with this crystal balling will be appreciated. :oops:
 
The launch 360 drew about 190 ~ 200 W from the wall. Assuming an expensive 80 - 85% efficient PSU you're looking at about 150 - 170 W for the entire system, with the CPU drawing much more power than the GPU.

The figures I saw on the net around 2005 suggested 80W peak for the CPU and 30W for the GPU (possibly not including daughter die). I can't find these figures now, and may have actually dreamt them up, but I remember them seeming to be from officialish sources.

I don't think you'll be getting 120W GPUs in next gen systems!
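The arithmetic behind those wall-to-system numbers, in case anyone wants to plug in different assumptions (the efficiency range is the assumption here, the wall draw is the commonly quoted figure):

```python
# Wall draw -> DC power budget for the launch 360 (efficiency figures are assumptions).
wall_draw_w    = (190, 200)     # measured at the wall
psu_efficiency = (0.80, 0.85)   # "expensive" PSU assumption

low  = wall_draw_w[0] * psu_efficiency[0]    # ~152 W available to the whole system
high = wall_draw_w[1] * psu_efficiency[1]    # ~170 W available to the whole system
print(f"system power budget: roughly {low:.0f}-{high:.0f} W")
```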
 
Given how much beefier the heatsink for the CPU was (multiple heatpipes, massively higher surface area and most of the airflow from the fans) I don't think there can be any doubt the CPU was dissipating vastly more heat than the GPU. 40W for the CPU would also give it the kind of performance per Watt that even Intel's very, very best laptop parts would have been envious of, and also mean the rest of the Xbox (excluding CPU and GPU) was burning 100W+ (or something), which would be rather worrying!
 
80 watts would put it very close to a much larger (219mm2, 243 million transistors at 90nm) Athlon 64 X2 from 2005 (89W TDP, probably less in use).

And I think the video cards closest to Xenos were using much more than 30 watts. The X1800 was over 100 (not suggesting Xenos is anywhere near 100... just probably closer to 50 than 30).

Anyway, it really doesn't matter unless you expect they will somehow be limited to following that power split. I expect a bit more weight on the GPU (rather than an even split between CPU and GPU) for the coming generation. While a 120W TDP is unlikely to be on the table for the GPU, I don't think 75W is out of the question.
 
A64s went up to 110W (even on the Opterons), and MS/IBM were pushing the frequency as high as they could - original specs showed 3.5GHz for the CPU. Getting performance close to an 89/110W A64 out of an 80W chip wouldn't have been too bad an effort as it was, but considering it was a new chip, very high frequency, rushed, and they had no fallback uses for working but out-of-spec processors, I think they did okay. That really is quite a chunky heatsink on the CPU, and we know MS were cutting it thin with the cooling as it was. There's no way the GPU heat output was coming close to the CPU heat output. Even with the emergency heatsink on the self-destruct-o-thon GPU, it still had far less cooling.

I expect more power to go towards the GPU side of things next generation, especially if Llano is any indicator of the shape of things to come (and I hope it is).
 