Predict: The Next Generation Console Tech

Status
Not open for further replies.
Sure they do. Each time the chips enter a process revision, the power and cooling required decrease. That's how it's been going this gen so far. It's how it's always worked.
The promised power savings at equivalent circuit performance have been in the ballpark of 20%, while transistor budgets have increased by much more.

Many massive OoOE cores and 80 high-activity vector processors scaled at that rate would be 20% off of stupendously huge.
 
And that's after removing OoOE...
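To put rough numbers on that, here's a quick back-of-the-envelope sketch. Every figure in it (the 20% per-node power saving, a doubling of the transistor budget each node, the 100 W starting point) is an illustrative assumption, not vendor data:

```python
# Back-of-the-envelope scaling sketch. Assumption: each process node
# cuts power ~20% at equal performance while the design doubles its
# transistor count to fill the new budget.

def scaled_power(base_watts, nodes, power_saving=0.20, density_gain=2.0):
    """Power of a chip that spends the whole transistor budget each node."""
    per_node = (1.0 - power_saving) * density_gain  # net power factor per shrink
    return base_watts * per_node ** nodes

# A hypothetical 100 W design that grows to fill each new node:
print(scaled_power(100, 3))  # roughly 410 W - growth swamps the saving
```

The point of the toy model is just that a 20% per-node saving is easily swallowed if you keep spending the whole new transistor budget on more cores and wider vector units.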

Just thinking in terms of the Xbox 360, given how well Xenon is holding up against Core 2 and A64 X2 processors in games like Left4Dead and GTA4, I think it's hard to say MS would be making a mistake if they left it out again ...

(... and if it leaves you with more juice to power a big fast graphics chip, that's a bonus).
 

GTA4 is just rubbish porting. No other game shows that level of CPU requirement on the PC, especially no other console port. Left4Dead runs extremely well on a modest 2.4GHz C2D.
 

I agree in general, though L4D is probably a very poor reference. If anything I would point to DMC4, Fallout 3, Oblivion, or perhaps Lost Planet (not that it was a well done port). The problem with a comparison is that I'm not aware of any title that is particularly similar to GTA4 in scope.
 

Yeah, I was just responding to function's mention of L4D as an example of Xenon faring well against PC CPUs. But given that pretty modest CPUs seem to handle it no problem (in my experience at least), as long as you have the graphics to back it up, I don't think it's a particularly valid example.
 
GTA4 is just rubbish porting. No other game shows that level of CPU requirement on the PC, especially no other console port. Left4Dead runs extremely well on a modest 2.4GHz C2D.

There's no evidence I'm aware of that GTA 4 is a rubbish port on the PC - on the CPU side at least. The game is doing a hell of a lot, and a huge amount of development (with help from MS Ninjas) has gone on. Just as expected, time spent developing for the 360 is paying off. We know the game benefits from at least 3 cores on the PC, so it's not like they've broken threading, or broken caches (much bigger on the PC), or somehow turned off OoOE and branch prediction.

Valve do indeed list a Core 2 at 2.4GHz as recommended for L4D.

Ubisoft list a 2.2GHz Core 2 as recommended for Assassin's Creed (and there's a lot less going on than in GTA4).

As developers have gained experience with Xenon, I'm not surprised to see PC requirements for similar play experiences creeping up. The 360 is still more powerful than the base system that many PC games target.

That Xenon can run with the Core 2 despite being a year older and far smaller shows that huge cores (with expensive OoOE units and huge caches) might not be the best way to spend your silicon and your Watts when you have the luxury of creating a new platform specifically to run games created for that platform.
 
*Just a general reminder to please keep the PC comparisons to a bare minimum here and on topic. Thanks.*
 
Do we really need a top of the line GPU architecture for the next gen consoles? That is the question I asked myself after seeing the new Uncharted 2 screens. Those highly multisampled screens are all but in-game, and it takes only a feeble 7800GT'x' to run them. Imagine we sub the 7800GT'x' with a GTX280: what would we get? 1280x1080 at 2xAA/8xAF and 30fps to quell all IQ questions on HDTV. Obviously we would have more power for 128bit HDR and a fine-grained faked ray-traced lighting model. There should be some more left over to bump up the geometry of important models. Now imagine a game looking like that: it is not Toy Story and would not match a console going for a top of the line GPU, but I think I am all but ready for that kind of next gen graphics. How many more fully modelled distant trees do we really need?

The trick for next gen will be synthesizing the game world to fully react like the real world, or reel world for exaggerated effects. I want every shot to shatter leaves, every grenade to leave a small crater and kill off nearby living things, every rocket blast to bring collateral damage, every enemy to cower in fear of danger.

I am revising my prediction. I want the Cell CPU to take a larger stage: 4GHz triple-pumped floating point on the SPUs, drawing from 2GB of XDR2 RAM. The GPU can be a cool 32nm GTX280-class part; 256bit and 1GB of GDDR5 would satisfy my graphics needs. After Killzone 2, I am a believer in post-processing goodness. I hope to see more Cell2+RSX2 synthesis with a pumped up FlexIO bandwidth.
 
*Just a general reminder to please keep the PC comparisons to a bare minimum here and on topic. Thanks.*

Yeah, wasn't meaning to get into a distracting line of discussion; was just thinking about how a lack of OoOE (and other stuff) was seen as a big loss for Xenon, and perhaps it was. Given enough raw power, enough talent and enough time and money, it doesn't appear to have stopped a high level of performance being extracted from it though.

I'd assumed until recently that with more time to develop the CPU for Xbox 3, MS would go for all the bells and whistles that Core 2, A64 et al. have. Maybe this wasn't a great assumption.

Does anyone have any idea as to the kind of things that IBM could have bunged onto the Xenon cores to speed them up, and the kind of impact these things would have had on die size and power consumption? How would things look for a next generation CPU?
 
Given enough raw power, enough talent and enough time and money, it doesn't appear to have stopped a high level of performance being extracted from it though.

The immediate, obvious counterexample is the Cell. Having a ton of horsepower that can only be extracted with a ton of work doesn't seem something publishers are really that willing to deal with.
 
There's no evidence I'm aware of that GTA 4 is a rubbish port on the PC - on the CPU side at least.

It's quite simple as far as I see it: no other game comes even close to stressing the CPU like GTA4. Those other games you mention with high recommended requirements all breeze by on my own 2.4GHz dual-core Conroe. GTA4 on the other hand struggles, so unless it's the only game pushing Xenon so hard on the 360, and pushing it a lot harder than any other game that's made it to the PC, it must be the fault of the port.

And I can't imagine that 3 years after its release, a console isn't having its CPU pushed to the limit in lots of games.

I really don't see what GTA4 is doing on the CPU that requires so much power; San Andreas has as much going on, simply with worse graphics. Or look at what's being done CPU-wise in high end RTSes. And I hate to bring it up, but not even Crysis with all its interactive foliage, physics and relatively intelligent AI pushes the CPU like GTA4.

Sorry if this seems to be going off topic, but I think it's relevant: to assume that Xenon is as good as a fairly high end dual core OoOE CPU based on one game, and hence conclude that a Xenon derivative, or at least another "lightweight" CPU without OoOE, is a viable alternative for next gen consoles, is a faulty assumption IMO.

Yes, Xenon performs at least as well as a 2.4GHz dual-core Conroe in GTA4, but you've gotta consider what could have been done if that Conroe was in the 360 with all the MS optimisation that the 360 benefited from.
 
Actually, 3 years isn't anything like long enough to have mastered writing multi-threaded software.

More cores can - and sometimes do - pay off compared to fewer, more complex cores. This is why we are moving to increasingly multi-core designs. When you can afford to be less concerned about single-threaded performance, and more concerned about overall performance, your CPU design considerations will change.

A relatively slow Phenom X3 can surpass Xenon in GTA 4. A dual core struggles to. Why this happens is pretty obvious IMO, and it doesn't mean either processor, or platform, or the game, is bad.
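That pattern is roughly what a simple Amdahl's-law model predicts. A quick sketch, where the 90% parallel fraction is a purely illustrative assumption, not anything measured from GTA4:

```python
# Amdahl's-law sketch: ideal speedup over one core when only part of
# the per-frame workload can be spread across threads. The parallel
# fraction used below is an illustrative assumption.

def amdahl_speedup(parallel_fraction, cores):
    """Ideal speedup over a single core for a given parallelisable fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Suppose ~90% of the frame workload lives in worker threads:
for cores in (2, 3, 4):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

The jump from two to three cores is much larger than the jump from three to four, which is consistent with an engine built around three primary threads: a triple core clears the hump, and a quad adds comparatively little.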
 
And here I thought it was because transistors are still shrinking but core microarchitecture itself has run into power and speed walls.

To answer in kind: you should probably have thought about it just a little more then!

If larger, more complex cores could lead to the same benefits that increasing the number of cores could, we'd still be seeing all the extra transistors being thrown into caches, dynamic branch predictors, whatever, etc rather than multiple cores.

AMD and Intel had different ideas about when they should do this. STI, with Cell, had a different view again.

You seem to be confusing the issue of making cores faster and more power hungry (for increasingly small returns) with the issue of making cores larger and more complex (for increasingly small returns). We were talking about the issue of core complexity vs core number [edit] - I imagine the issues you run into with running them faster (power and speed walls) will be similar for both [/edit].
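For a feel of the complexity-vs-count trade-off, here's a toy model that takes Pollack's rule (single-core performance growing roughly with the square root of the transistors spent on it) as its stated assumption; the budget units are arbitrary:

```python
# Toy core-complexity model. Stated assumption (Pollack's rule):
# single-thread performance scales with the square root of the
# transistor budget spent on one core.

def big_core_perf(budget):
    """Single-thread performance of one core using the whole budget."""
    return budget ** 0.5

def many_core_perf(budget, n_cores):
    """Aggregate throughput of n equal cores sharing the same budget."""
    return n_cores * (budget / n_cores) ** 0.5

budget = 16.0  # arbitrary transistor-budget units
print(big_core_perf(budget))      # one big core: 4.0
print(many_core_perf(budget, 4))  # four small cores: 8.0 aggregate
```

Under that assumption, splitting the same budget four ways doubles aggregate throughput while halving single-thread performance, which is exactly the trade the thread has been arguing about.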
 

Basically, your argument is in favor of explicit parallelism over speculative execution. My point is that it's not a zero-sum game -- it's possible to achieve substantially better single-thread performance than Xenon offers without going overboard on complexity in a single-core.
 

Personally, I'm in favour of better single threaded performance as I think hardware should try and support the general case (run as well as possible with the widest range of games) and lower the bar for entry for as many developers as possible.

From MS's perspective I think Xenon was the right choice though. It hasn't held the platform back relative to its competitors (in general stuff performs well enough), marquee titles have extracted a high level of performance (and things will undoubtedly continue to improve), it's smaller than an FX-60 and they could customise it and own the IP. I could quite understand if MS chose a similar path next time.

What would you see as the right balance next time round?
 
One thing I am more and more convinced of is that, especially in this uncertain economy, and because process shrinks are also becoming uncertain, we are going to see more conservative hardware relative to the cutting edge next time. And I'm also nearly positive we'll see a debut at the $299 price point, where the hardware will probably already be break-even or close to it.

I also think this economy may lead all parties to hold onto their current systems ever longer. Introducing expensive new tech in this climate is suicide.
 
.... and because process shrinks are also becoming uncertain,

The price per transistor will continue to drop after process shrinks stop. The price of a chip is primarily related to the capital cost of tooling a fab. Once you don't need new tools anymore, you see longer amortization periods and hence falling costs on existing processes.
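A toy amortisation sketch of that point, where every figure (fab capex, wafer throughput, running cost) is a made-up illustrative number:

```python
# Amortisation sketch: the fab's capital cost is spread over more
# wafers the longer the tooling stays in service, so per-wafer cost
# falls even with no further shrink. All numbers are hypothetical.

def cost_per_wafer(fab_capex, wafers_per_year, years_in_service,
                   variable_cost=1500.0):
    """Capital share per wafer plus per-wafer running cost."""
    capital_share = fab_capex / (wafers_per_year * years_in_service)
    return capital_share + variable_cost

# A hypothetical $3B fab running 300k wafers a year:
for years in (2, 4, 8):
    print(years, round(cost_per_wafer(3e9, 300_000, years)))
```

The capital share halves each time the service life doubles, while the variable cost floor stays put, so mature nodes keep getting cheaper per wafer (and per transistor) for years after the leading edge has moved on.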

Btw, I agree with all your points: focus will be on cost next gen - and on selling services through online infrastructure from day one: movies, music, games etc.

I still think MS and Sony alike will aim for a $399 price point the first year. The first year they sell to early adopters and gamers. However, a clear strategy for hitting the $299 price point the second year and $199 two years down the line will be very important.

Cheers
 
A relatively slow Phenom X3 can surpass Xenon in GTA 4. A dual core struggles to. Why this happens is pretty obvious IMO, and it doesn't mean either processor, or platform, or the game, is bad.

Yeah, I agree the game's poor performance is without a doubt because it's looking for 3 primary threads from the CPU and has to "fit" into 2 on a dual core.

The recommended specs are a dead giveaway for this - quad from Intel but only triple from AMD. 4 cores are actually overkill for the game.

The reason I see the game as a poor port, however, is actually because this is the case. Most PCs have 1 or 2 cores, so the game should have been developed to run on two primary threads, not 3. I don't doubt for a second that a dual-core Conroe could handle what's going on in this game if it had been designed from the outset for two cores rather than 3.

It seems as though they just shoehorned the 360's code onto the PC without any effort to make sure it can run OK with only two threads. I realise the requirement for 3 threads is probably pretty fundamental to the engine, and hence it's unrealistic to expect it to have been re-written with 2 threads in mind, but that doesn't stop it being poorly optimised for dual cores.

I guess you could argue that with 3 or more cores the game is optimised just fine for the PC :smile:
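A crude way to see the 2-vs-3-core cliff is to model each core as running whole threads back to back within a frame. The 10 ms per-thread figure below is purely illustrative:

```python
# Frame-time sketch: with fewer cores than primary threads, at least
# one core must run two threads sequentially, stretching the frame.
# Thread timings are illustrative assumptions.

import math

def frame_time_ms(thread_ms, n_threads, n_cores):
    """Ideal frame time when each core runs whole threads one after another."""
    threads_on_busiest_core = math.ceil(n_threads / n_cores)
    return thread_ms * threads_on_busiest_core

# Three equally loaded 10 ms primary threads per frame:
print(frame_time_ms(10, 3, 2))  # one core runs two threads -> 20 ms
print(frame_time_ms(10, 3, 3))  # one thread per core -> 10 ms
print(frame_time_ms(10, 3, 4))  # fourth core idle -> still 10 ms
```

Even this idealised model (no scheduling overhead at all) shows the frame time jumping when three threads squeeze onto two cores, and the fourth core buying nothing - matching the "triple from AMD, quad overkill" observation above.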
 
pjbliverpool said:
Most PCs have 1 or 2 cores, so the game should have been developed to run on two primary threads, not 3.
That's a lot easier said than done. Having some very current experience with taking a multicore console codebase over to fewer/single-core PCs, the kind of undertaking you ask for can be comparable to rewriting most of the game from scratch (it depends on the specifics of the codebase, but still).
The fact that the (Windows) PC is probably by far the most horrific platform in existence to work on when it comes to multithreading games (by fault of both inadequate tools and the OS) doesn't help with these kinds of ports either. Which does agree with your final point though: it's entirely possible that the processing load is nothing special and the majority of the time is wasted on bad thread scheduling and similar issues.
 