Predict: The Next Generation Console Tech

We're not living in the '80s anymore though. There is a reason it died out. '80s monitors were large and expensive, but these days you can get 22" flatscreen monitors for under 100 euros.
 
Regarding CPU power, if you want to do camera capture (Kinect/Move style) at 1080p60 (or even more), you probably need a lot of CPU power to decode that, and that kind of code might not be super suitable for a GPU.
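For a sense of scale, here's a minimal sketch (my own back-of-envelope numbers, assuming a YUV 4:2:0 stream at 1.5 bytes per pixel; the actual camera format isn't specified) of the raw data rate a 1080p60 feed implies before any decompression work even starts:

```cpp
// Back-of-envelope data rate for a 1080p60 camera feed.
// Assumption: YUV 4:2:0 frames (1.5 bytes per pixel); decoding a
// compressed stream into this on the CPU is where the cycles go.
#include <cstdio>

int main() {
    constexpr double width  = 1920.0;
    constexpr double height = 1080.0;
    constexpr double fps    = 60.0;
    constexpr double bytes_per_pixel = 1.5;   // YUV 4:2:0 assumption

    constexpr double pixels_per_second = width * height * fps;  // ~124.4 million pixels/s
    constexpr double raw_mb_per_second =
        pixels_per_second * bytes_per_pixel / (1024.0 * 1024.0); // ~178 MiB/s of raw frames

    std::printf("pixels/s: %.1f M, raw stream: %.1f MiB/s\n",
                pixels_per_second / 1e6, raw_mb_per_second);
    return 0;
}
```

Roughly 124 million pixels per second to fill, which is why the decode step alone can eat a serious chunk of CPU time.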
 
We're not living in the '80s anymore though. There is a reason it died out. '80s monitors were large and expensive, but these days you can get 22" flatscreen monitors for under 100 euros.
The reasons why the age of integrated home computers, bought for games and productivity, died out are numerous. The point is that back then, one device used for everything was popular. It's also been popular with PCs. It's also popular in mobile form on tablets, and even on smartphones. It's even being extended to CE devices like Android TVs, with productivity apps being runnable on TVs via web services, though clearly that's early days yet. The notion that people take an active dislike to added value in devices confuses me. Where adding features adds cost, then yes, there's a reason against it. But where the internals of a device contain a processor, graphics, local storage and access to high-capacity non-volatile storage, and the device already runs a variety of software, extending that comes at little cost.

Not to derail this with a discussion that's been had many times before. Suffice to say multifunction devices are on the increase, proving people are not against versatile CE devices.
 
So, um... you do realize that the 360 is running Windows, right? Sure, it's a heavily modified, limited Windows kernel, but it's still mostly compatible. Most of my tools will compile for either Xbox or Windows with a single makefile change.

Not really, I can't run the programs I run on my PC on my Xbox. My point was that it's that feature of having a common ISA which would be advantageous to end users.

TBH though, it's one of the fringe advantages IMO. The bigger advantage, as has already been mentioned, is raw performance.
 
Intel's highest-end processor at the end of 2005 was the Pentium D, and I doubt anyone wanted that abomination in a console.

For power and heat reasons, absolutely. I still think both console CPUs are pretty impressive for their size/heat/power. For raw performance though, I'm betting most devs would rather have seen a 3.2 GHz PD in the 360 than Xenon.

AMD had the Athlon X2 at 2.4 GHz (with a TDP of 110 W). The Athlon might have higher performance than Xenon in some fields, but it has nowhere near the same SIMD capabilities.

And what exactly does that SIMD performance translate into in the real world? Let me put it another way, Xenon has more theoretical SIMD performance than an i7 920. Which app/game/anything demonstrates that?


I don't think I need to compare these CPUs with the Cell, because everybody reading here by now should know that the Cell is helping the RSX in ways an Athlon X2 or a Pentium D never could have.

Possibly; it's never been demonstrated or even attempted, so who knows? But Cell is halfway to a GPU anyway, hence why it performs so well under those types of workloads. No doubt it helps RSX out a lot, but what would have performed better: an Athlon X2 with Xenos, or Cell with RSX? If your GPU is good enough in the first place, then it's better to have a good all-round CPU than a very weak CPU that can also act as a decent prop for a slow GPU.
 
Except none of those systems would have touched Cell with Xenos. It's not like Xenos and RSX are that massively different in size. I kind of doubt they were that massively different in R&D budgets either. It's simply that one turned out to be a better choice than the other. Simply put, if it's as expected, Sony waited too long to go to outside help on the GPU because they thought they had an internal/Toshiba solution that would work.
 
Out of curiosity, I wonder what developers would have preferred: Xenon with an 8800GTX, or Cell with an 8800GTX (the 8800 is there to remove the GPU limitations).
 
I thought Xenos was the X360 GPU, or was it Xenon? Microsoft seriously needs to dump the confusingly close codenames.
 
The reasons why the age of integrated home computers, bought for games and productivity, died out are numerous. The point is that back then, one device used for everything was popular. It's also been popular with PCs. It's also popular in mobile form on tablets, and even on smartphones. It's even being extended to CE devices like Android TVs, with productivity apps being runnable on TVs via web services, though clearly that's early days yet. The notion that people take an active dislike to added value in devices confuses me. Where adding features adds cost, then yes, there's a reason against it. But where the internals of a device contain a processor, graphics, local storage and access to high-capacity non-volatile storage, and the device already runs a variety of software, extending that comes at little cost.

Not to derail this with a discussion that's been had many times before. Suffice to say multifunction devices are on the increase, proving people are not against versatile CE devices.

Actually, home computers were better at running games than consoles; you had the Commodore 64 and the like versus toys: the Atari 2600, 7800, Coleco, early NES games.
There was more variety back then, and you could even technically self-publish games using the stock hardware.

It would only make sense to run a real web browser and a word processor on a next-gen console (or even an uncrippled PS3, under a lightweight desktop such as Fluxbox or LXDE).

Already people buy "computer gear" to run a console on, such as a 22-inch monitor and bad speakers.
This allows homes without a TV set :)
 
For power and heat reasons, absolutely. I still think both console CPUs are pretty impressive for their size/heat/power. For raw performance though, I'm betting most devs would rather have seen a 3.2 GHz PD in the 360 than Xenon.

Raw performance of what??
Performance certainly isn't defined by the ISA in any capacity so I'm not sure what you're trying to say here...?

And what exactly does that SIMD performance translate into in the real world? Let me put it another way, Xenon has more theoretical SIMD performance than an i7 920. Which app/game/anything demonstrates that?

Read my previous posts...

Possibly; it's never been demonstrated or even attempted, so who knows? But Cell is halfway to a GPU anyway, hence why it performs so well under those types of workloads. No doubt it helps RSX out a lot, but what would have performed better: an Athlon X2 with Xenos, or Cell with RSX? If your GPU is good enough in the first place, then it's better to have a good all-round CPU than a very weak CPU that can also act as a decent prop for a slow GPU.

CELL is far from very weak, and in typical PS3 game workloads today it would piss all over an Athlon X2 with great ease. The parallelism advantages alone (8 available hardware threads vs 2) prove this, given your typical modern game engine setup: a heavily job-based execution model for all your heavy-lifting non-rendering (and, in PS3's case, even rendering) processes...

It's not all about wide caches and optimized instruction pipelining when reshuffling your silicon to offer another 2-4 hardware threads can give you very real, large-scale performance gains in almost all practical cases...
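To make the "job-based execution model" mentioned above concrete, here's a minimal sketch of the idea: a wide, independent workload is chopped into slices and handed to however many hardware threads the machine offers (2 on an Athlon X2, 6 on Xenon, PPE+SPEs on Cell). The Entity/update_entity names are purely illustrative, not from any real engine:

```cpp
// Minimal job-style parallel update: each worker thread owns a disjoint
// slice of the entity array, so no locking is needed.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Entity { float x, y, z; /* ... */ };

void update_entity(Entity& e) {
    // per-entity work: animation, AI tick, whatever the job covers
    e.x += 1.0f;
}

void parallel_update(std::vector<Entity>& entities, unsigned num_threads) {
    std::vector<std::thread> workers;
    const std::size_t chunk = (entities.size() + num_threads - 1) / num_threads;

    for (unsigned t = 0; t < num_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end   = std::min(entities.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back([&entities, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                update_entity(entities[i]);
        });
    }
    for (auto& w : workers) w.join();
}
```

The point is that the same code scales with thread count: the more hardware threads the silicon budget buys you, the more of the frame you can cover this way.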
 
Raw performance of what??
Performance certainly isn't defined by the ISA in any capacity so I'm not sure what you're trying to say here...?

I'm not linking performance to the ISA specifically. We all know x86 actually has a small disadvantage in those terms. But regardless of this, Intel/AMD still make the fastest pure CPUs out there.

My specific comment though related to the Pentium D being a faster option than Xenon. Obviously disregarding the fact that it could never have been practically put in a console.

Read my previous posts...

You listed a few theoretical examples where you think SIMD performance is useful. I'm asking you to put that into the context of everything else that goes on in a game to provide some real world examples of where it's resulted in something like Xenon, with its apparent SIMD advantage, giving superior results to something like an i7 920. I know you probably can't provide evidence, since that would be hard to come by anyway, but specifically, what applications/games would you expect Xenon to perform better in?


CELL is far from very weak, and in typical PS3 game workloads today it would piss all over an Athlon X2 with great ease.

Obviously there's a lot of power in Cell, but it's all down to the SPUs, and most of it's spent on tasks that would be better performed on a decent GPU. By very weak I was really referring to the PPE. As I said, if you have a decent GPU in the first place, thus negating the need for much of the work Cell does in those PS3 workloads, then are you better off with Cell or a good all-round CPU?

As a processor to help out with GPU work, Cell certainly is much better than an AX2, and probably in some ways better than modern quads too. But if you don't need that help on the GPU front, is it better than an AX2 for everything else? Certainly I don't think there's anything on PS3 that couldn't be done on an Athlon X2-powered console with a better GPU.
 
Square Enix: Next gen to bring 'a big leap' in graphics

Stylised games may seem to plateau, but realism will raise the bar, says technology director Julien Merceron.

The next generation of games machines will be driven to new levels of graphics realism, says Julien Merceron, worldwide technology director at Square Enix.

"I think that we're still going to see a big leap in graphics," Merceron told VideoGamer.com at Develop this week. "In terms of technology I think we'll see developers taking advantage of physically-based rendering, physically-based lighting. I think people will take advantage of global illumination, or at least some form approximation of global illumination, so that could have a significant impact on graphics quality.

Physically-based rendering and global illumination are techniques that allow coders to create photo-realistic effects. Both processes are already used in CGI film-making - and this in turn could benefit game developers.


"It's going to enable new forms of art direction, but it's also going to enable deeper convergence between multiple media - being able to share more assets horizontally between movies, TV series and games," said Merceron.

"This means that when you're doing a cartoon, or when you're doing an animated movie, you could think about an art direction for the game that could be far closer [than current tie-ins]. Obviously it won't be the same, because the processing power won't be there, but you can think about art directions being way closer. And you can think about assets being re-used."

But Merceron believes that graphical advances will be most evident in games that strive for a realistic appearance.

"There's a lot of room for improvement, and consumers will be able to see that in future graphics innovation techniques," he said. "Now, if you take most of the Pixar movies from the last five to six years… do you see a big difference between one that was released five years ago, and one that was released last year? I'm actually not sure we see a huge difference.

"But if you take a film like Avatar, there's a huge leap in the graphics techniques that are being used and the level of realism. The conclusion I would draw from that is we might end up seeing the difference way more in realistic-looking games, rather than those trying to achieve a cartoony look. At some point, with all these games [that are] going for a cartoony look, consumers might get the feeling that it's plateauing. But for games striving for a very realistic look, it's going to be easy to see all the improvements, all the time."

During the same conversation, Merceron expressed his belief that social networks and similar experiences will play a major role in the future of gaming.

http://www.videogamer.com/news/square_enix_next_gen_to_bring_a_big_leap_in_graphics.html
 
You listed a few theoretical examples where you think SIMD performance is useful.

Theoretical examples?
No, I listed several practical examples coming directly from our (and several other) production codebases I'm personally familiar with. There's nothing theoretical about current SIMD usage in video games today. The proof is out there, sitting on store shelves and running on people's consoles/PCs at home...

I'm asking you to put that into the context of everything else that goes on in a game to provide some real world examples of where it's resulted in something like Xenon, with its apparent SIMD advantage, giving superior results to something like an i7 920.

Comparing a triple-core, 6+ year old, lean-and-mean PPC chip with a state-of-the-art, off-the-shelf, full-fat desktop CPU (with probably 2-3+ times the transistor count) is a bit silly, no?
Especially if the aim is to argue the merits of MS/Sony putting an x86 ISA chip in the next console over an IBM/PPC solution. I thought that was what we were discussing...?

Obviously there's a lot of power in Cell, but it's all down to the SPUs, and most of it's spent on tasks that would be better performed on a decent GPU.

In some cases, yes. However, there are lots of tasks CELL excels at when compared to your typical GPU architecture (visibility and occlusion culling, for example, given that any solution is typically heavily tied to the bespoke nature of the game's render/scenegraph).
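As an illustration of why this kind of work suits wide SIMD hardware, here's a minimal sketch (mine, written with plain SSE intrinsics purely for readability; on Xenon/Cell the same idea would map to VMX128 or SPU code) of sphere-versus-plane culling done four spheres at a time over structure-of-arrays data:

```cpp
#include <xmmintrin.h>  // SSE intrinsics, for illustration only

// Test 'count' bounding spheres (structure-of-arrays, 16-byte aligned,
// count padded to a multiple of 4) against a single frustum plane,
// four spheres per iteration.
void cull_against_plane(const float* cx, const float* cy, const float* cz,
                        const float* radius, int count,
                        float nx, float ny, float nz, float d,
                        unsigned char* visible)   // 0/1 per sphere
{
    const __m128 pnx = _mm_set1_ps(nx);
    const __m128 pny = _mm_set1_ps(ny);
    const __m128 pnz = _mm_set1_ps(nz);
    const __m128 pd  = _mm_set1_ps(d);

    for (int i = 0; i < count; i += 4) {
        // signed distance of 4 sphere centres to the plane: dot(n, c) + d
        __m128 dist = _mm_add_ps(
            _mm_add_ps(_mm_mul_ps(_mm_load_ps(cx + i), pnx),
                       _mm_mul_ps(_mm_load_ps(cy + i), pny)),
            _mm_add_ps(_mm_mul_ps(_mm_load_ps(cz + i), pnz), pd));

        // a sphere survives this plane if dist >= -radius
        __m128 keep = _mm_cmpge_ps(
            dist, _mm_sub_ps(_mm_setzero_ps(), _mm_load_ps(radius + i)));

        int mask = _mm_movemask_ps(keep);
        for (int lane = 0; lane < 4; ++lane)
            visible[i + lane] = (unsigned char)((mask >> lane) & 1);
    }
}
```

It's the same maths regardless of architecture; the point is that flat, coherent data plus a branch-free inner loop is exactly what the SPUs (or any wide SIMD unit) chew through quickly.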

By very weak I was really referring to the PPE. As I said, if you have a decent GPU in the first place, thus negating the need for much of the work Cell does in those PS3 workloads, then are you better off with Cell or a good all-round CPU?

That's just it: what exactly would you want your general-purpose CPU to do?

- Physics & collisions? Not an ideal solution for such a task, given the limited throughput.
- Game logic + scripts? Literally runs on the main thread with very little overhead in most cases.
- AI? Probably a good fit, yes, but most games don't throw much at the hardware in this vein due to the limited production resources diverted to it (generally AI only needs to be "good enough", which doesn't particularly create a case for resource-intensive computation...).

Outside of that, pretty much everything else running in your average game will be processes heavily related to rendering in some way. Some may be more suited to a GPU-like architecture, some to a CPU core/SPU, but overall this category generally takes up the lion's share of your frame.
Now, given that most processes in this category are embarrassingly easy to parallelise, it paints a pretty clear picture of what kind of CPU would be a better fit given the same silicon budget (I'll give you a hint... it's the one with the most parallel execution threads).

I'll repeat myself again when I say that in almost all cases you're going to see better performance when you have more scope to do computation in parallel than when you have fewer threads but more optimal instruction pipes, and on the same silicon budget a CELL-like chip will always provide that.
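A toy example of that trade-off, with entirely made-up throughput numbers just to show the shape of the argument (the 0.5 units/cycle for a lean in-order thread is an assumption, not a measurement):

```cpp
// Illustrative only: aggregate throughput of a perfectly parallel
// workload on "few fat cores" vs "many lean cores".
#include <cstdio>

int main() {
    constexpr double fat_core_throughput  = 1.0;  // normalised: fat out-of-order core
    constexpr double lean_core_throughput = 0.5;  // assumed penalty for narrow, in-order issue

    constexpr double fat_threads  = 2.0;          // e.g. a dual-core desktop CPU
    constexpr double lean_threads = 8.0;          // e.g. a PPE + SPE style layout

    std::printf("fat cores:  %.1f units/cycle\n", fat_core_throughput * fat_threads);    // 2.0
    std::printf("lean cores: %.1f units/cycle\n", lean_core_throughput * lean_threads);  // 4.0
    return 0;
}
```

On a workload that parallelises cleanly, even a sizeable per-thread penalty is recovered by sheer thread count; the argument only breaks down when the work can't be split.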

As a processor to help out with GPU work, Cell certainly is much better than an AX2, and probably in some ways better than modern quads too. But if you don't need that help on the GPU front, is it better than an AX2 for everything else? Certainly I don't think there's anything on PS3 that couldn't be done on an Athlon X2-powered console with a better GPU.

When will we ever not need more computational power?
Give us the equivalent of 680 GTX GPUs in PS4 + Xbox 720 and we'll always find situations where more horsepower for rendering and rendering-related processes will be useful.
In this vein we'll always be better off with that selfsame GPU and a CELL-like CPU than switching the CPU out for something more PC-like and general purpose.

Finally my last point is this...

I think a lot of people should remember that the kind of work done in your average game is very different to that of your average PC application. As such, when you're choosing a microprocessor for your platform, you should be thinking about getting the right processor design for the given workload.
Traditional general-purpose CPU designs provide a "best fit" approach to getting good performance out of a completely non-deterministic computational flow: numerous tasks spread across several applications/processes/services, with very little persistent data coherency and continuous rapid context switching.
Games, being glorified simulation models, tend to be more controlled beasts. Having the luxury to lay out your code at the high level to play nice with the underlying architecture, you get a much more stable computational model which favours persistent processes continuously churning through and transforming very large streams of data. In this case throughput is key: the more you have, the more data you can transform and the more "simulation" (rendering or not) you're able to achieve.
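A minimal sketch of that "persistent process churning through a large stream" pattern: one flat, structure-of-arrays layout swept by a tight, branch-free loop, which is the shape that keeps throughput-oriented hardware (SPUs, wide SIMD, DMA engines) fed. The names are illustrative only:

```cpp
#include <cstddef>
#include <vector>

// Structure-of-arrays particle data: contiguous streams rather than
// an array of structs, so each field can be streamed through in bulk.
struct ParticleStream {
    std::vector<float> px, py, pz;  // positions
    std::vector<float> vx, vy, vz;  // velocities
};

void integrate(ParticleStream& s, float dt) {
    const std::size_t n = s.px.size();
    // Contiguous, branch-free sweep: easy for a compiler (or hand-written
    // SIMD) to vectorise, and trivial to split into jobs across threads.
    for (std::size_t i = 0; i < n; ++i) {
        s.px[i] += s.vx[i] * dt;
        s.py[i] += s.vy[i] * dt;
        s.pz[i] += s.vz[i] * dt;
    }
}
```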

Coming from this, I think it's safe to say that as games become bigger and more elaborate in their efforts to increase the parameters of the "simulation" model, we'll need more and more capacity to transform data quickly, with the focus not specifically on pushing through what we have faster, but on pushing through more of it.

As an aside,
CELL isn't the first CPU console developers have put to use for "helping out with graphics"; this practice has been around since forever. It was done on PS2 with the EE being used heavily for animation, physics etc., and even on PSP with the VFPU for similar tasks. I've also heard stories of devs on much older platforms (think Spectrum and Amiga days) even using the machine's audio chips for rendering.
As developers, we will always want to do more with the hardware we have, and if we can find cycles to spare in any part of a system we'll try our hardest to make use of them in order to make better games.

I'm ducking out of this thread now but I hope my stance on this makes a bit of sense.
:p
 
I've always thought of the drive to move certain non-rendering tasks to the GPU as a kind of false economy. Every time you do that, you are taking away time the GPU could be using to do the thing it is most efficient at: graphics. GPGPU is really fast at a lot of tasks, but mostly because it's a brute-force technique that compensates for inherent inefficiencies by dedicating a lot of execution hardware to a task at once.

I'd rather we not end up in a situation where x86 CPUs have been adopted under the assumption that GPGPU means we can de-emphasize SIMD performance on the main processor. What if it means we're stuck at 720p because the GPU is spending so much time doing physics, sound, or whatever? Especially when, at the same transistor budget, you could probably have a Cell-like design where you're producing 1080p and actually offloading graphics tasks to the CPU without sacrificing anything. What task that a Sandy Bridge is best at is so critically important that it would be worth potentially giving up 4-8 times the SIMD power?
 
I'd rather we not end up in a situation where x86 CPUs have been adopted under the assumption that GPGPU means we can de-emphasize SIMD performance on the main processor. What if it means we're stuck at 720p because the GPU is spending so much time doing physics, sound, or whatever? Especially when, at the same transistor budget, you could probably have a Cell-like design where you're producing 1080p and actually offloading graphics tasks to the CPU without sacrificing anything. What task that a Sandy Bridge is best at is so critically important that it would be worth potentially giving up 4-8 times the SIMD power?

This is basically the point I was getting at in my previous posts.

What is it about physics, sound, or any of the other things that SIMD is supposed to be well suited for that couldn't be done on a decent x86?

Or put it another way: Xenon has a very large theoretical SIMD advantage over an Athlon X2. So if the 360 had contained an Athlon X2 rather than Xenon, do you expect that certain tasks being performed by Xenon today would have had to be moved to Xenos for performance reasons?

Fair enough if the answer's yes, but I think that's the core question, i.e. did the SIMD-based focus of Xenon (or Cell), versus more general performance, make it better or worse than an x86 processor of the day for the non-rendering tasks those CPUs were performing in the consoles?

I say non-rendering because in the case of Cell that clearly was an advantage. But then, did all the non-rendering tasks suffer as a consequence? And if the answer in performance terms is no, what about in terms of developer effort?
 