Predict: The Next Generation Console Tech

We seem to have gone from accepting that Microsoft and Sony won't leap forward much in tech terms (due to price point) straight back to insane tech specs.

The GTX 295 is the first card to reach 1080p60 in Crysis (in DX9, anyway) at maximum settings, and surely that is hugely over-engineered compared to what we can expect from next-gen console GPUs?

I've said it before, but the GeForce GTX 280 was 65nm at 1.4 billion transistors. If we are looking at a 2011 launch for a new Xbox or PS4 then we will have at least 40nm (GPUs on that process should start appearing this year), or possibly 32nm.

http://www.anandtech.com/GalleryImage.aspx?id=5499

Also, as you can see, in DX10 mode at 1920x1200 the 4870 1GB is only 13.8 frames behind the GTX 295, and it's 500 million or so transistors smaller.

http://www.anandtech.com/GalleryImage.aspx?id=5503

Left 4 Dead at 1920x1200 on a 4870 gives you 77fps with 4x FSAA. CoD: WW gives you 50fps at the same resolution and AA settings.

With the exception of Crysis, I doubt any of these engines were designed with the PC's strengths in mind, as they are console ports.

Give us another two years or so of innovation, plus DX11 (which should hopefully improve performance and ease of coding for many of the DX10 features) and of course newer process nodes, and I don't think we are looking at insane specs.

I think later this fall, when the first DX11 cards come out, we will get a real feel for what next-gen games will be capable of. I think we can all agree that for the GPU, Sony and MS will likely go with Nvidia and ATI, and will go for a DX11 part, especially with a 2011 time frame.
 
We seem to have gone from accepting that Microsoft and Sony won't leap forward much in tech terms (due to price point) straight back to insane tech specs.

The GTX 295 is the first card to reach 1080p60 in Crysis (in DX9, anyway) at maximum settings, and surely that is hugely over-engineered compared to what we can expect from next-gen console GPUs?

You have to realize how RIDICULOUSLY unoptimized Crysis is as well though.

You will not need anywhere near the power currently required to run Crysis at a given speed on a PC in order to run a game that looks as good (or Crysis itself) on a potential future console.

Heck, it naturally falls short in certain areas, but as a total package I think KZ2 challenges Crysis right now, on a lowly 7800 GTX equivalent. Other games like RE5, Gears 2, Uncharted 2, FF13, Rage, and COD4 also look amazing, again on roughly 7800-class cards in the current consoles.

Conversely, if the next-gen consoles only get a GPU equivalent in raw power to one that can run Crysis well on PCs today, the late-gen games those consoles push out will likely look on the order of 2x as good as Crysis or more... which is a kind of scary thought.
 
I was actually thinking of the current Xbox 360 CPU, with the same pipelines, ISA, etc., possibly bumped to four cores, likely with a larger cache.

I have a gut feeling that MS won't increase the number of cores beyond four, since their own research concluded that "for the types of workloads present in a game engine, we could justify at most six to eight threads in the system" (Jeff Andrews and Nick Baker, "Xbox 360 System Architecture", IEEE Micro, March-April 2006, p. 35).

I would like to point out that if MS said "our extensive research found there was no way to justify more than 48 GPU/VPU fragment shader cores in game engines", I am sure no one would buy it. Another way of putting it: it would not be very wise to tell developers "we found that you definitely could use more of (insert whatever you like), but we just didn't give it to you." We mustn't forget that at some point MS also thought 256MB of RAM would be "sufficient" to meet the needs of those developing for their platform.

I will repeat that, so long as you must still produce parallelized code, the number of cores is largely irrelevant to the complexity of your task, at least at the scale of a single console. Getting fewer cores won't magically make the task any easier; that's a pipe dream. A nicer core architecture will let you focus more on the task of parallelizing code, but it will do little to nothing toward completing that task for you.
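
To make that concrete, here is a minimal sketch (modern C++, purely illustrative, with a made-up update_entities job standing in for real game work): once the update is written as "split the work across however many hardware threads exist", the code is the same on three cores or six; only the runtime split changes.

```cpp
// Minimal sketch: the parallelization effort lives in structuring the work as
// independent chunks, not in the core count. The same code runs unchanged on a
// 3-core or a 6-core CPU; only hardware_concurrency() differs at runtime.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical per-entity game work (animation, AI, physics, ...).
static void update_entities(std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i) { /* ... per-entity update ... */ }
}

int main() {
    const std::size_t entity_count = 10000;
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> pool;
    const std::size_t chunk = entity_count / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end = (w + 1 == workers) ? entity_count : begin + chunk;
        pool.emplace_back(update_entities, begin, end);  // fan the work out
    }
    for (auto& t : pool) t.join();                       // wait for all chunks

    std::printf("updated %zu entities on %u worker threads\n", entity_count, workers);
}
```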

The focus must be on better tools, design and instruction or there will yet again be a lot of long faces and finger pointing next round. Giving everyone less (relative to what you're claiming to be selling) is a good way to turn up the heat on everyone while not taking any real initiative to solve the problems that will surely be compounded in the next round if ignored.
 
I was actually thinking of the current Xbox 360 CPU, with the same pipelines, ISA, etc., possibly bumped to four cores, likely with a larger cache.

That would not be cost-effective, nor would it give you any interesting performance. So unless they find backwards compatibility to be the most important thing (I doubt it), I reckon they will go for something that gives you more bang for the buck.
 
I would like to point out that if MS said "our extensive research found there was no way to justify more than 48 GPU/VPU fragment shader cores in game engines", I am sure no one would buy it. Another way of putting it: it would not be very wise to tell developers "we found that you definitely could use more of (insert whatever you like), but we just didn't give it to you." We mustn't forget that at some point MS also thought 256MB of RAM would be "sufficient" to meet the needs of those developing for their platform.

AMD's papers for Fusion also say something like that (up to 8 cores, IIRC, even with only 10% serial code; that tracks with Amdahl's law, where 10% serial code caps an 8-core part at roughly a 4.7x speedup, so extra cores buy little), but it is mostly a price/performance point (hence the accelerators). So there are probably better ways of getting CPU performance than just adding more cores, and given that most of today's CPUs would be better anyway, I really don't think it makes much sense to stick with something that isn't a real upgrade over Xenon, rather than just more cores.
 
I am saying the focus on core count is misplaced.

AMD is thinking about their normal user base which isn't game developers. Likewise, I could find papers extolling the virtues of multiple cores.

That's not the point. Simply adding more cores shouldn't be the point. Just upgrading the cores shouldn't be the point.

The point should be much more encompassing than that, and it needs to be clear what the real problems are and what will actually solve them or limit their effect on game development. More cores is not a panacea. Fewer cores is not a panacea. Better cores are not a panacea. None of these things in isolation will solve anything at all.

I think we can largely guess what will happen with the hardware, but frankly that only begins to address the technological needs of the next round. I am not for more of everything at any expense. I am saying: meet what you're marketing with the hardware, and then put as many people as possible in a position to take advantage of it by any means necessary. If this isn't done, watch in awe as history needlessly repeats itself... again.

In my opinion there is a lot more to be done AFTER the hardware is finalized than before...and that's before developers receive their shiny new kits.
 
Isn't DX11 coming out this year with Windows 7? I'm thinking DX12 or some DX11/12 hybrid on the nextbox.

Yes, the software will be out this year on Vista and Windows 7; the hardware will be out in the fall, apparently.

I'm not sure 2011 will be late enough for a DX12 version to be out; it would only be two years from the launch of DX11. But I can certainly see a hybrid version, much like Xenos had some elements of DX10 in it.
 
What is DX11 and 12 hardware going to offer us? Is there going to be hardware acceleration of features, or will DX11 and 12 be focussed on more open, programmable hardware?
 
In my opinion there is a lot more to be done AFTER the hardware is finalized than before...and that's before developers receive their shiny new kits.

Good tools are really needed, but they will vary a lot depending on the HW.

What is DX11 and 12 hardware going to offer us? Is there going to be hardware acceleration of features, or will DX11 and 12 be focussed on more open, programmable hardware?

http://www.anandtech.com/video/showdoc.aspx?i=3507

For DX12 I think that for now any answer is true;)
 
What is DX11 and 12 hardware going to offer us? Is there going to be hardware acceleration of features, or will DX11 and 12 be focussed on more open, programmable hardware?

The biggest thing may be the tessellator:

Now don't let this technical assessment of fixed function tessellation make you think we aren't interested in reaping the benefits of the tessellator. Currently, artists need to create different versions of their objects for different LODs (Level of Detail -- reducing or increasing complexity as the object moves further or nearer the viewer), and geometry simulation through texturing at each LOD needs to be done by pixel shaders. This requires extra work from both artists and programmers and costs a good bit in terms of performance. There are also some effects than can only be done with more geometry.

Tessellation is a great way to get that geometry in there for more detail, shadowing, and smooth edges. High geometry also allows really cool displacement mapping effects. Currently, much geometry is simulated through textures and techniques like bump mapping or parallax occlusion mapping or some other technique. Even with high geometry, we will want to have large normal maps for our lighting algorithms to use, but we won't need to do so much work to make things like cracks, bumps, ridges, and small detail geometry appear to be there when it isn't because we can just tessellate and displace in a single pass through the pipeline. This is fast, efficient, and can produce very detailed effects while freeing up pixel shader resources for other uses. With tessellation, artists can create one sub division surface that can have a dynamic LOD free of charge with a simple hull shader and a displacement map applied in the domain shader will save a lot of work, increase quality and improve performance quite a bit.

Sounds like the cost of art could actually go down from this generation to the next if these features are taken advantage of. I know the 360 has one of these units in it; however, the DX11 unit seems to have been expanded, and of course a 2011 variant would be much more powerful than the 2005 version in the 360.

I read this as allowing for more destructible environments too. If you shoot a rocket at a wall, the tessellator should be able to deform the wall and remove the parts that aren't there anymore.

[Images from the article: the current authoring pipeline vs. the tessellation-based authoring pipeline]
http://www.anandtech.com/video/showdoc.aspx?i=3507&p=8
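
To give a rough idea of the "dynamic LOD for free" part, here is a sketch of distance-based tessellation factors in plain C++ rather than the actual HLSL hull shader, with made-up distances and tuning values: one authored low-poly patch gets its subdivision amount picked per frame from camera distance, instead of artists hand-authoring several LOD meshes.

```cpp
// Sketch of distance-based tessellation factors: one authored low-poly patch,
// with the subdivision amount chosen per frame from camera distance. In DX11
// this computation would sit in the hull shader's patch-constant function; the
// domain shader would then displace the new vertices with a displacement map.
#include <algorithm>
#include <cstdio>

// Hypothetical tuning values, not taken from the article.
const float kMaxTessFactor = 64.0f;   // DX11 per-edge tessellation limit
const float kNearDistance  = 5.0f;    // fully subdivided at or inside this range
const float kFarDistance   = 200.0f;  // coarsest at or beyond this range

float tessFactorForDistance(float distanceToCamera) {
    // Fade the factor linearly between far and near, clamped to [1, 64].
    float t = (kFarDistance - distanceToCamera) / (kFarDistance - kNearDistance);
    t = std::max(0.0f, std::min(1.0f, t));
    return 1.0f + t * (kMaxTessFactor - 1.0f);
}

int main() {
    const float samples[] = {2.0f, 50.0f, 120.0f, 300.0f};
    for (float d : samples)
        std::printf("distance %6.1f -> tess factor %5.1f\n", d, tessFactorForDistance(d));
}
```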
 
It's a shame the industry hasn't thrown its weight behind tessellation already, considering the architecture we've been seeing since the Xbox 360 in 2005.
 
It's a shame the industry hasn't thrown its weight behind tessellation already, considering the architecture we've been seeing since the Xbox 360 in 2005.

I always thought the 360 had a very basic one, like the R600's. I would also bet that the lack of 360-exclusive engines is one of the reasons it's gone unused. I think Viva Pinata is one of the titles that uses the feature; they use it for the grass, if I'm not mistaken.
 
Good tools are really needed, but they will vary a lot depending on the HW.

They can, but this is a clear case of better to have than have not. Tools, libraries, examples, infrastructure (farms, mo-cap, network, servers, etc.), standards, and both delayed and immediate support paths all need to be addressed. The hardware is only one piece of the puzzle, and there is a great deal of technology that is just as critically important. This gen was just a sampling of many things that must mature, and quickly, or there will be a rather nasty price to pay.
 
It's a shame the industry hasn't thrown its weight behind tessellation already, considering the architecture we've been seeing since the Xbox 360 in 2005.

It is a tall task when most tool chains aren't close to ready for it and the common skill set of artists and programmers alike does not map well to using it.
 
This happens every few years:

... tessellation ... is gonna happen now ... it's the next new big thing ...
but every single time it doesn't happen; people just are not interested.
You would have thought people would have learnt from history (oh wait, maybe not...)

Like those cults proclaiming the end of the world.
 
That would be a real minor update to the CPU, no? Especially given how weak each core is; many of today's dual cores (at least if custom) would probably beat it...
and it would keep all of the problems.
Without some rework, that's more than likely. I remember Capcom (early on) stating that they were able to achieve, with Xenon, the same level of performance as a dual-core P4.
The P4 was crushed per cycle by the Athlon, which got crushed by Intel's Core architecture, and the upcoming Nehalem will raise the bar again (likely not by the same amount, nor on every workload).
I'm just trying to emphasize your "how weak the cores are" ;) but it's also to show that there is room for improvement. I also think that MS may not need (and most likely couldn't reach) the level of performance Intel or AMD will be able to deliver by then.
MS needs something cheaper/smaller than what Intel or AMD has to offer, and with lower power requirements. Xenon is weak in so many regards compared to top-of-the-line x86 CPUs, and can be improved in so many ways, that I think MS/IBM have room to make substantial gains in performance without breaking their power/die-space budget.
How much? I think some members may come up with proper estimates; judging by AMD's and Intel's progress (which came at a cost), I would say MS/IBM should aim at an overall 30% improvement per cycle.
Overall that wouldn't be a great jump in power, but the really good serial performance found in the PC realm is likely out of reach of what MS can afford.

Maybe MS, on top of offloading a lot of calculations to the GPU, could consider using fixed-function units enslaved to the CPU cores to free up CPU resources. I'm thinking especially of units handling decompression, and maybe network acceleration too.
I remember seeing average CPU utilisation figures for PGR3, and basically more than one core was dedicated to decompression; talk about a waste of silicon.
Using ~30 million transistors running at 3.2GHz to achieve what a DSP would handle faster while being way tinier... not to mention more power-efficient.
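
For a rough picture of that cost on the CPU side, here is a simplified sketch; zlib is just a stand-in for whatever codec PGR3 actually used, and the streaming-queue structure is invented for illustration. A whole hardware thread ends up spinning in a loop like this, which is exactly the kind of work a small dedicated decompression unit could take over.

```cpp
// Sketch of asset decompression eating a general-purpose core: a worker thread
// pulling compressed blocks and inflating them with zlib. On a console with a
// fixed-function decompression block, this loop is what the DSP would absorb.
#include <zlib.h>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct CompressedAsset {
    std::vector<unsigned char> data;  // bytes as stored on the disc
    uLongf uncompressedSize;          // known from the asset's header
};

static void decompressWorker(const std::vector<CompressedAsset>& assets) {
    for (const auto& a : assets) {
        std::vector<unsigned char> out(a.uncompressedSize);
        uLongf outLen = a.uncompressedSize;
        // This is where the 3.2GHz core's cycles actually go.
        if (uncompress(out.data(), &outLen, a.data.data(), a.data.size()) != Z_OK)
            std::fprintf(stderr, "decompression failed\n");
        // ... hand `out` to the renderer / audio system ...
    }
}

int main() {
    std::vector<CompressedAsset> streamingQueue;  // filled by the streaming system
    std::thread worker(decompressWorker, std::cref(streamingQueue));
    worker.join();
}
```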
 
Using ~30 million transistors running at 3.2GHz to achieve what a DSP would handle faster while being way tinier... not to mention more power-efficient.
Even with state-of-the-art efficiency per transistor per cycle for their superscalar cores, this won't stop being true. This is more an argument for spending more area on SPUs/Larrabee than anything else.
 
It is a tall task when most tool chains aren't close to ready for it and the common skill set of artists and programmers alike does not map well to using it.
I always thought the 360 had a very basic one, like the R600's. I would also bet that the lack of 360-exclusive engines is one of the reasons it's gone unused.
Yes. Unfortunately, technologies and tools haven't been built around it. As the 360 is a closed platform there was a lot of potential there from the early days, but we haven't seen many adaptations several years later. As mentioned, exclusive engines are lacking for the console. Even Alan Wake (also coming to PC), which will aggressively use tessellation, appears to be doing so on the CPU, and I don't know if it would be practical to support it in hardware. The PS3, with the Cell processor, also has a lot of potential for tessellation. And of course there are DX10 GPUs with the geometry shader, but that architecture still isn't the baseline for development in PC titles ^.^
I think Viva Pinata is one of the titles that uses the feature; they use it for the grass, if I'm not mistaken.
Yes. In fact I believe most of the terrain makes use of hardware tessellation. Impressive, but one in a million. I hoped Rare would come up with a bigger, more ambitious game using that sort of rendering technique, like a new Perfect Dark or something.
 
Am I correct in assuming that every GPU from ATI has had this feature since the R600?

Perhaps it would be in ATI's best interest to get a small team together and start going around to all the different companies, adding tessellator support to their games. It seems to me that the only thing that would change is that instead of using the authored LOD models, it would dynamically create its own best-case one on the fly.

That may help them a lot in current games.


Hopefully next gen, though, with it being better supported in DX11, we get to see a lot of it in the next round of consoles.
 