Predict: The Next Generation Console Tech

Status
Not open for further replies.
It's probably a lot more than that considering how limited it is in functionality - it's just not worth mentioning. For example, Alone in the Dark 5 used it. :p

I think Two Worlds 2 uses it also; it was in one of their papers. Well, they didn't mention it specifically, but they did displacement mapping on the PC and Xbox 360 versions of the game, and from what I can remember displacement mapping requires tessellation, yeah?
 
For what if I may ask?:p

Heightmaps. XD

Unreal Engine 3 also has some support for GPU-tessellated terrain or surfaces. Gears of War 2 and Batman: Arkham Asylum come to mind (fluid surfaces), but it can also be used for plain terrain à la Viva Piñata. The 360 version of Mass Effect 1 probably used it for the uncharted worlds as well, but I could never get that confirmed (GPU vs. just CPU). There's a UDK page on terrain/tessellation with a bit of info that practically fits Mass Effect 1's usage. :p

I always wondered if it might have been used (or viable) for the Battlefield games thus far, but I'm pretty sure they don't use it. I guess it's not worth the time.

Is it a case of just the Xenos being weak, or is the implementation itself also a problem?
A combination of both, but the implementation is pretty weak if you just compare the R2VB version under DX9 to the more complete DX11 iteration (hull shaders, etc.).
 
It's probably a lot more than that considering how limited it is in functionality - it's just not worth mentioning. For example, Alone in the Dark 5 used it. :p
In general, Xenos' tessellation is no less functional than DX11 tessellation. The actual tessellation algorithm is different, and adaptive tessellation requires multiple passes, but the functionality is there.
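The adaptive tessellation mentioned above usually boils down to computing a per-edge tessellation factor from something like projected edge length: in DX11 that logic lives in the hull shader, while on Xenos it would need a separate pass feeding the tessellator (hence multipass). Here is a minimal sketch of the factor computation in Python; the crude projection, the constants, and the function names are all illustrative, not from any real engine, though the [1, 64] clamp matches the DX11 tessellator's supported range:

```python
# Sketch: per-edge adaptive tessellation factors from projected edge length.
# All constants and names are illustrative; only the [1, 64] clamp reflects
# the actual DX11 tessellation factor range.

def project(p, view_dist):
    """Crude perspective scale: screen extent falls off with distance."""
    x, y, z = p
    return (x / (z + view_dist), y / (z + view_dist))

def edge_tess_factor(p0, p1, view_dist=1.0, pixels_per_tri=8.0, screen_px=720.0):
    """More subdivision for edges that cover more screen pixels."""
    a, b = project(p0, view_dist), project(p1, view_dist)
    screen_len = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 * screen_px
    # Clamp to the tessellator's supported factor range [1, 64].
    return max(1.0, min(64.0, screen_len / pixels_per_tri))

# A nearby edge gets a high factor; a distant edge of the same
# world-space length gets a low one.
near = edge_tess_factor((0.0, 0.0, 1.0), (1.0, 0.0, 1.0))
far = edge_tess_factor((0.0, 0.0, 10.0), (1.0, 0.0, 10.0))
assert near > far
```

The point of the per-edge (rather than per-triangle) factor is that two triangles sharing an edge compute the same factor for it, which avoids cracks along the shared edge.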
 
Thanks, that fits my prediction that the next Xbox will launch in 2013. The next-generation AMD GPU will be in production in 2012, I assume. Thus, if MS continues to use AMD GPUs, their next-generation system may use a 20/22 nm version of those GPUs. I think it is safe to expect at least HD 6970-level GPU performance in the next Xbox.

How much more powerful is that compared to the Xbox 360 GPU?
 
About 10x more transistors, I'd say it's pretty close to that much more powerful as well.

Much more importantly, how much power does that draw?

At 20 nm, probably not a whole lot. A possible console version would see some tweaks anyway. The 6970 with only slightly lower clocks already draws a lot less.

Personally, I think that speculating about a PC GPU already on the market going into a console years before that console's launch is not smart. Over the last few years, I feel like ten different GPUs have at some point been considered for the next-gen consoles. Who still thinks the next Xbox is going to have an AMD 4850, even if the console had launched this year or next? Nobody, but two years ago that was the speculation. GPU makers constantly improve their tech, and basically the best guess at this point is, as Arwin implied, a budget based on power draw.

The consoles are going to get a GPU with what the makers find to be a reasonable power draw and up-to-date tech at the console's launch year. There are bound to be changes in tech compared to the 6970 of today.
 
Would a system akin to TurboCore make sense in a console?

I mean, you could take it even further than just overclocking the CPU, too: either OC the CPU or the GPU to stay within the TDP. Many games have different needs; some need more CPU, others more GPU.
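The idea above, trading boost headroom between CPU and GPU under one shared TDP, can be sketched as a tiny budget allocator. All wattages, clocks, and scaling numbers here are invented for illustration; this is just the shape of the scheme, not any real TurboCore implementation:

```python
# Sketch: one fixed TDP budget shared between CPU and GPU boost,
# in the spirit of the TurboCore suggestion. All numbers are invented.

TDP_W = 120.0
CPU_BASE_W, GPU_BASE_W = 30.0, 70.0   # power draw at base clocks
CPU_MHZ_PER_W = 50.0                  # boost bought per spare watt (made up)
GPU_MHZ_PER_W = 20.0

def boost_clocks(cpu_bound, cpu_base=1600.0, gpu_base=500.0):
    """Spend the spare TDP on whichever unit the current game is limited by."""
    headroom_w = TDP_W - CPU_BASE_W - GPU_BASE_W   # 20 W left to spend
    if cpu_bound:
        return cpu_base + headroom_w * CPU_MHZ_PER_W, gpu_base
    return cpu_base, gpu_base + headroom_w * GPU_MHZ_PER_W

print(boost_clocks(cpu_bound=True))    # (2600.0, 500.0) -- CPU-heavy game
print(boost_clocks(cpu_bound=False))   # (1600.0, 900.0) -- GPU-heavy game
```

Total draw stays at 120 W either way; only where the headroom goes changes, which is exactly why (as noted later in the thread) games would have to be written against a range of clocks rather than one fixed spec.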
 
How much more powerful is that compared to the Xbox 360 GPU?


Xenos was a 232 million transistor chip (182 mm² at 90 nm): 240 GFLOPS.

Cayman is 2.6 billion transistors (389 mm² at 40 nm): 1.6 TFLOPS.

The next Xbox GPU at 20 nm is anyone's guess, but I think it is possible to fit >2 billion transistors in a 182 mm² die at 20 nm. It will also be 1.5-2 generations more modern than Cayman, so I expect it to be more optimized. It may be better than Cayman in some specs and worse in others, but the overall performance should be close enough.
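The back-of-envelope claim above checks out with simple arithmetic, using the figures quoted in the post and the usual idealized assumption that transistor density scales with the square of the process ratio (real shrinks fall short of this ideal, so treat the result as an upper bound):

```python
# Rough check of the transistor figures quoted above.
xenos_transistors, xenos_area = 232e6, 182.0      # 90 nm, mm^2
cayman_transistors, cayman_area = 2.6e9, 389.0    # 40 nm, mm^2

# Cayman has roughly 11x Xenos' transistor count:
print(cayman_transistors / xenos_transistors)     # ~11.2

# Idealized scaling: halving the feature size quadruples density.
cayman_density = cayman_transistors / cayman_area     # ~6.7M per mm^2
density_20nm = cayman_density * (40.0 / 20.0) ** 2    # ~26.7M per mm^2

# In a Xenos-sized 182 mm^2 die at 20 nm:
print(density_20nm * xenos_area / 1e9)            # ~4.9 billion transistors
```

Even if the shrink only delivers half the ideal density gain, that still comfortably clears the ">2 billion transistors in 182 mm²" estimate.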
 
I think Two Worlds 2 uses it also, it was in one of their papers, well they didn't mention it specifically but they did displacement mapping on the PC and xbox 360 version of the game and from what I can remember displacement mapping requires tessellation yeah?
There is no polygon-level tessellation or displacement mapping in Two Worlds 2; they used pixel shaders for quadtree displacement mapping.
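For context, quadtree displacement mapping belongs to the family of per-pixel heightfield ray-intersection techniques: the pixel shader shifts the texture coordinate to where the view ray actually hits the heightmap, so no extra geometry or tessellation is needed. Below is a toy 1D sketch of that core idea; the quadtree itself (a max-height mip chain that lets the real technique take larger steps) is omitted, and every name and number here is illustrative:

```python
# Sketch of the core idea behind pixel-shader displacement techniques:
# march the view ray through a height volume per pixel and return the
# shifted texture coordinate, instead of moving any vertices. Quadtree
# displacement mapping accelerates this march with a max-height quadtree;
# this plain linear march omits that for clarity.

def height(u):
    """Toy 1D heightfield in [0, 1] (stand-in for a heightmap texture)."""
    return 0.5 if 0.4 <= u <= 0.6 else 0.0

def displaced_uv(u_start, view_dx, view_dz, steps=100):
    """March the view ray down from the surface; return the hit coordinate."""
    u, depth = u_start, 1.0            # start at the top of the height volume
    du = view_dx / view_dz / steps     # horizontal step per unit of descent
    for _ in range(steps):
        depth -= 1.0 / steps
        u += du
        if height(u) >= depth:         # ray dipped below the heightfield
            return u
    return u                           # no hit: ray reached the base plane

# A ray entering at u=0.3 at a grazing angle lands on the 0.5-high bump.
hit = displaced_uv(0.3, view_dx=0.5, view_dz=1.0)
assert 0.4 <= hit <= 0.6
```

In a real shader this loop runs per pixel against a 2D heightmap, and the returned coordinate is used for the actual texture fetches, which is why the silhouette stays flat but interior parallax looks displaced.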
 
Instead of scaling down a power-monster PC GPU, how about scaling up a handheld GPU?
For example, use 10× the NGP (PSP2) design: 40 ARM cores, 10 PowerVR cores, all higher clocked, of course.
 
IMO a gaming laptop GPU would be a better approximation. Similar power requirements etc.
 
True, but mobile GPUs always seem to trail desktop GPUs in terms of development timelines.

Look at the high-performance and amazingly performance-per-watt-efficient Sandy Bridge CPU. Its roots are in the Pentium M, which was designed from the start with efficiency in mind. PC GPUs are not designed that way, and shrinking them later to make laptop GPUs is always going to be less efficient than scaling up an already low-power design, IMHO.
 
True, but mobile GPUs always seem to trail desktop GPUs in terms of development timelines.
They don't trail desktop GPUs in development; it just seems that way because it takes time for the laptops themselves to launch.

Look at the high-performance and amazingly performance-per-watt-efficient Sandy Bridge CPU. Its roots are in the Pentium M, which was designed from the start with efficiency in mind. PC GPUs are not designed that way, and shrinking them later to make laptop GPUs is always going to be less efficient than scaling up an already low-power design, IMHO.
The question is: will those scaled-up GPUs perform as well as the scaled-down desktop GPUs, and does any performance difference matter to the consumer?
 
About 10x more transistors, I'd say it's pretty close to that much more powerful as well.

Personally, I think that speculating about a PC GPU already on the market going into a console years before that console's launch is not smart.

GPU makers constantly improve their tech, and basically the best guess at this point is, as Arwin implied, a budget based on power draw.

The consoles are going to get a GPU with what the makers find to be a reasonable power draw and up-to-date tech at the console's launch year. There are bound to be changes in tech compared to the 6970 of today.

This is a sensible post.

The numbers might be big, but the brand refresh will be a more compelling selling point than whatever hardware lives inside. The content lives within the same paradigm; it's just a prettier, bigger, smoother-feeling ride (and not anywhere near 10× the ride, IMO).
 
http://www.zdnet.com/blog/btl/exclusive-microsoft-looking-to-2015-for-next-gen-xbox-release/46247

Directly on the heels of discovering an internal Microsoft video showing WGX’s vision for ubiquitous gaming across platforms (Xbox, Windows, Windows Phone et al), I have now stumbled upon an incredibly tiny-though-significant morsel from a designer Microsoft has brought on-board to collaborate with their IEB (Interactive Entertainment Business) Design group to investigate future user experience scenarios and hardware for the Xbox circa 2015.
Posted just yesterday (3/20/2011) on the Web portfolio of one Ben Peterson, here is a screenshot of the noted confidential project as it currently resides:

[Image: xbox-2015-th.png]
 
This will probably sound pretty dumb, but I'm going to ask anyway.

Would there be any benefit for consoles to have a form of instant overclock/underclock functionality?

What I mean is, I don't know enough about how consoles are currently set up to say whether, during tasks that aren't CPU- or GPU-intensive, those chips downclock themselves to save electricity and heat.

Using a console to play music, look at photos, surf the web, or stream via Netflix or DLNA wouldn't need the console's full power, correct? I know these machines are not running off battery power, so the energy savings are not that "important"; however, anything that reduces the heat these machines generate will help with the lead-free solder being used and such.

And if they can downclock the processors, couldn't they also provide a slight, safe version of overclocking at certain times to improve performance? While pushing graphics might look like the best place to overclock a processor, couldn't they designate other pieces of code to run in overclocked mode, freeing up extra time for more intensive processes?

Like I said, this question is probably dumb, but it's better to ask and get an answer than to not ask and never know.

One last thing: why don't these consoles keep running their fans after being turned off (put in standby mode) after gaming? The heat coming out of my console right before I turn it off means the processors, without a fan, cool very slowly while the motherboard around them cools much faster, putting stress on the solder joints.

I've gotten into the habit of letting my console cool itself off before I turn it off, in hopes that it will give the solder joints extra life.
 
Games are supposed to be built around a fixed specification, and having a turbo mode in a console not only increases the ambiguity of development and of how well software performs, but could lead to another RROD fiasco if some piece of software keeps turbo mode going too long, even on just one core of many. I would highly discourage such an idea; multi-thread your goddamn code if it's overloading just one core/thread.

Controlling the clock speed for non-gaming tasks is, I would think, already implemented, as pulling 100 W just to run a DVD is madness.
 