Predict: The Next Generation Console Tech

So, imagine launching against a console priced at $250 with a 15-million-unit head start and a good share of third-party games being developed for it. Please explain exactly what graphical feat will be available for consoles in 2014 that will wow mainstream audiences into choosing a new Sony or MS console over the Wii U. Because for all the tech that is inside the PS3, it's still in last place in the console war, even with its recognizable name.

Massively better graphics will easily do it. Let's just take RAM as a proxy for graphics: the Wii U likely sits at 512MB-1GB, while PS4/XB3 will likely sit at 4-8GB.

Very quickly, software development will tend to shift towards those more powerful consoles, games will be PS4/XB3 ports, and the Wii U will be left out. Say you have two machines with 4GB of RAM and the Wii U with 1GB: it will be left out and not receive ports from Ubisoft, EA, or Activision. Exactly where the Wii sits today. All your arguments would have been just as relevant last gen (except that the Wii likely had an initial burst of popularity that dwarfs anything the Wii U will see); see how that worked out? It could have been stated like this in 2006:
"Please explain exactly what graphical feat will be available for consoles in 2009 that will wow mainstream audiences to choose a new Sony or MS console over the Wii?"

Why do you think Nintendo fans are so obsessed with hoping Wii U has some beefy hardware under the hood right now?

Because for all the tech in the PS3 and 360, combined they have outsold the Wii and they sit in a strong position in year 5 while the Wii fades.
 
Massively better graphics will easily do it. Let's just take RAM as a proxy for graphics: the Wii U likely sits at 512MB-1GB, while PS4/XB3 will likely sit at 4-8GB.

Very quickly, software development will tend to shift towards those more powerful consoles, games will be PS4/XB3 ports, and the Wii U will be left out. Say you have two machines with 4GB of RAM and the Wii U with 1GB: it will be left out and not receive ports from Ubisoft, EA, or Activision. Exactly where the Wii sits today. All your arguments would have been just as relevant last gen (except that the Wii likely had an initial burst of popularity that dwarfs anything the Wii U will see); see how that worked out? It could have been stated like this in 2006:

Why do you think Nintendo fans are so obsessed with hoping Wii U has some beefy hardware under the hood right now?

Because for all the tech in the PS3 and 360, combined they have outsold the Wii and they sit in a strong position in year 5 while the Wii fades.

I'll still be happier doing a nice-looking port from PS4/XB3 to Wii U than from PS3/XB2 to Wii.
I don't want a bottom-of-the-range 4000-derived chip though; mid to high range would be fine.
DX11-class would be great, but nobody seems to think that is going to happen.
 
I wonder how much better graphics people need to be happy.
I'm rather happy with BF3 quality; I'm not sure I care much past that point.
(I'd rather go for something other than photorealism; it's boring, I see that all the time in real life :p)
Agreed, I'm somewhat outside the mainstream market when it comes to gaming taste. I'm fond of old-school RPGs (Baldur's Gate-likes) and hack 'n' slash. I was also a huge driving-sim and fighting-game fan, but with age I lost interest. I can't play FPSs, it just feels wrong to me; I favor TPSs.
Overall I'm not into realism and photorealistic graphics at all, nor am I into anything that resembles real-world wars in gaming; I want alternate realities. For the kind of games I like, I'm not sure the technology is going in the right direction. I don't care for super-detailed characters (etc.), but I want a more consistent game world. I would happily accept a downgrade in graphical fidelity if it got rid of a lot of the artifacts and aliasing; I want more believable effects for things like fire, dust, and smoke, way better foliage, etc. Overall, more geometry.

I also want better collision systems, AI, physics, and animation.
A lot of this can be done with polygons, but I believe other rendering techniques would allow this to be achieved on reasonably specced systems.

I don't think you'll see the same visual leap that you have in previous generations.
I'm sure to tech-savvy people looking for it, it'll be there, but to the average game buyer who doesn't go around A/B-ing everything, I'm not so sure.
To my mind the big leap this generation was HD resolutions. I don't think that 3D is mature enough to have the same effect, though I do think it adds significantly to games when it's done well.

I think it's likely that MS and Sony will try to replicate the Wii's success, insofar as they will try to have a hook that isn't just "10x faster", and try to pull in some of the casual crowd that spent $300 on a Wii. I'm not sure they will be successful doing it, and I'm not sure those people would buy another Wii.
Well, they will have to sell something that includes a pad and Kinect at least, so you could be right. I hope they won't be too conservative. I've been advocating a single-chip/APU system for a while (even though lately it has turned into more of an admittedly unreasonable proposal), but I still want something competitive.
 
Massively better graphics will easily do it. Let's just take RAM as a proxy for graphics: the Wii U likely sits at 512MB-1GB, while PS4/XB3 will likely sit at 4-8GB.

What counts as massively better gfx than PS360?

Personally, if they gave me Avatar-like gfx, that would still be a much smaller jump than Halo 2 to Crysis/Crysis 2, much less a Killzone to Killzone 3 jump.

Sure, we all like better HW, nobody says otherwise, but it comes at a cost, and some (many, it would seem?) don't care about it enough to pay a higher price. I don't.

Very quickly, software development will tend to shift towards those more powerful consoles, games will be PS4/XB3 ports, and the Wii U will be left out. Say you have two machines with 4GB of RAM and the Wii U with 1GB: it will be left out and not receive ports from Ubisoft, EA, or Activision. Exactly where the Wii sits today. All your arguments would have been just as relevant last gen (except that the Wii likely had an initial burst of popularity that dwarfs anything the Wii U will see); see how that worked out? It could have been stated like this in 2006:


This gen is different from the last: we actually got benefits from the HW, and we all knew we would get them. Next gen, few think there can actually be that many benefits.

Plus, why are you so sure that software development will tend to shift towards those more powerful consoles? Are you sure they will bring so many benefits that everyone will want to play on them?

Are you sure that all the 3rd parties will want to invest and risk even more to make a game whose only selling point is better gfx?


The only way they can guarantee that is if every company stops making PS360/Wii U games and all the companies go after the small installed market of the next gen. Not very secure/healthy, IMO.


There is a good reason why PSN/XBL games are so popular and it is not gfx.


Why do you think Nintendo fans are so obsessed with hoping Wii U has some beefy hardware under the hood right now?

We must have been reading different posts...

But sure, bring on better HW.

Because for all the tech in the PS3 and 360, combined they have outsold the Wii and they sit in a strong position in year 5 while the Wii fades.

I am quite sure they would each prefer to have sold as much as the Wii, alone :D.



I have no problem thinking that PS4/XB3 will give us quite a nice update in the HW department, but it will not be the leading design/price feature. And it will be at a lower price point too, without losing money (at least spec-wise).

I also don't think that high-specced gfx engines will be an essential thing in games; they will be more like good special FX in a (non sci-fi) movie: the movie is much more than its special FX.


I think that gfx will improve over time, just at a slower pace and with a different focus...
 
I don't want a bottom-of-the-range 4000-derived chip though; mid to high range would be fine.
DX11-class would be great, but nobody seems to think that is going to happen.

I don't know. What are the chances that they will use AMD's Graphics Core Next?
I know all the speculation was about a 4000-derived chip, but as we already learned, those were really early dev kits.
And what would take so long about customizing a 2008 design that they wouldn't be ready by now? It seems like they really don't have a clue how the final product will perform, judging by the interviews we heard during E3.
I can only think of a 28nm chip (Southern Islands just taped out), but if they did that, why not use a 28nm architecture instead of a die-shrunk 2008 architecture, when the newer one features all the things the next Xbox and PS4 will support? They could get rid of things like OpenCL support to fit the transistor budget and use only a fraction of the processors the other two will have, but they would never be in the position of the Wii, where the engines couldn't support it.
Also, lots of RAM would be nice, but I guess more than 1.5GB won't happen.

pc999 said:
A 4600-4800 would be quite nice; I wouldn't mind more, but that is enough. Just make it cheap.

I can only think of a 46xx / 55xx / 65xx type card (320-480 shader units / 16-32 texture units / 8 ROPs); everything above that would probably already be too power-hungry for such a small box.
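For a sense of scale, here is a rough back-of-the-envelope sketch of the theoretical throughput such parts would offer, covering both the 46xx-class card above and the 4600-4800 range mentioned earlier. The shader/TMU/ROP counts follow the posts; the clock speeds are hypothetical examples in the range of the retail desktop cards, not anything confirmed for a console:

# Rough theoretical throughput for the card classes mentioned above.
def gpu_estimates(shaders, tmus, rops, core_mhz):
    gflops = shaders * 2 * core_mhz / 1000.0   # each ALU can do 1 MAD = 2 FLOPs per clock
    gtexels = tmus * core_mhz / 1000.0         # bilinear texels filtered per second
    gpixels = rops * core_mhz / 1000.0         # pixels written per second
    return gflops, gtexels, gpixels

configs = {
    "46xx-class (320 SPs, 16 TMUs, 8 ROPs, ~650 MHz)": (320, 16, 8, 650),
    "48xx-class (800 SPs, 40 TMUs, 16 ROPs, ~625 MHz)": (800, 40, 16, 625),
}
for name, cfg in configs.items():
    gflops, tex, pix = gpu_estimates(*cfg)
    print(f"{name}: ~{gflops:.0f} GFLOPS, ~{tex:.1f} GTexels/s, ~{pix:.1f} GPixels/s")

The gap between the two tiers (roughly 0.4 vs 1.0 TFLOPS) is what drives the "just make it cheap" versus "too power-hungry" trade-off being discussed.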
 
Sorry, but that is utter nonsense.


Why so? Old Halo is barely a step above Quake 2 (blocky, still-symbolic environments, low-res textures); Crysis gives you big forests with actual rocks, leaves, dirt, barrels, makeshift houses, boats, etc.

With a much more powerful console or computer, improvements will be more incremental: higher-res content, some stuff that was missing (bugs, mushrooms, animal life, more plant variety?), some better lighting. But you're still walking the same pseudo-realistic forest.

The same old problems are there: why can't I climb there? Why can't I chop that rope and break it? Why can't I eat that mushroom, and why doesn't walking over it crush it?
 
Why so? Old Halo is barely a step above Quake 2 (blocky, still-symbolic environments, low-res textures); Crysis gives you big forests with actual rocks, leaves, dirt, barrels, makeshift houses, boats, etc.

With a much more powerful console or computer, improvements will be more incremental: higher-res content, some stuff that was missing (bugs, mushrooms, animal life, more plant variety?), some better lighting. But you're still walking the same pseudo-realistic forest.

The same old problems are there: why can't I climb there? Why can't I chop that rope and break it? Why can't I eat that mushroom, and why doesn't walking over it crush it?

The graphics fidelity of Avatar is miles away from current console technology. Miles away. There's a far greater leap from Crysis to Avatar than from Halo to Crysis. Have you seen how much computing power it takes to render a single frame in Avatar?
 
I don't know. What are the chances that they will use AMD's Graphics Core Next?
I know all the speculation was about a 4000-derived chip, but as we already learned, those were really early dev kits.
And what would take so long about customizing a 2008 design that they wouldn't be ready by now? It seems like they really don't have a clue how the final product will perform, judging by the interviews we heard during E3.
I can only think of a 28nm chip (Southern Islands just taped out), but if they did that, why not use a 28nm architecture instead of a die-shrunk 2008 architecture, when the newer one features all the things the next Xbox and PS4 will support? They could get rid of things like OpenCL support to fit the transistor budget and use only a fraction of the processors the other two will have, but they would never be in the position of the Wii, where the engines couldn't support it.
Also, lots of RAM would be nice, but I guess more than 1.5GB won't happen.



I can only think of a 46xx / 55xx / 65xx type card (320-480 shader units / 16-32 texture units / 8 ROPs); everything above that would probably already be too power-hungry for such a small box.


I also find it strange if it's based on a 4x00; they should have a pretty good idea of what they could do with it, yet sometimes it seems like they only have some pointers to work with.

They probably have something really custom, or at least something closer to a 6x00 (7x00?).

Anyway, the raw power of a 4800 would be really good; beyond that, just work on the price/dev tools/online/...


The graphics fidelity of Avatar is miles away from current console technology. Miles away. There's a far greater leap from Crysis to Avatar than from Halo to Crysis. Have you seen how much computing power it takes to render a single frame in Avatar?

From a subjective POV, all that power doesn't matter, at least for game gfx.

Higher detail is harder to notice, especially in a firefight ;). Bigger scenarios are beyond the field of view.

It would hardly affect gameplay at all.

Anyway, gfx will keep getting better, but it will not be the focus of development anymore, IMO.
 
The graphics fidelity of Avatar is miles away from current console technology. Miles away. There's a far greater leap from Crysis to Avatar than from Halo to Crysis. Have you seen how much computing power it takes to render a single frame in Avatar?

Sure.
I'll have to admit I've only seen a few minutes of the movie, and it really felt like "Crysis: The Movie" or "3DMark: The Movie".
Just showing off stuff (wind from the helicopter or ship things blowing on the vegetation). A real tech demo; I wouldn't be surprised if there are OpenGL teapots and perfectly reflective marbles on Avatar's planet :)
 
I'm pretty sure it'll be a good long while before we even see another movie that surpasses the quality of the effects in Avatar. It's hard to improve on something that already looks real. Games aren't anywhere near that level.

Still, in terms of visual quality I actually agree that the gap between something like Halo and Crysis is far more pronounced than the difference between Crysis and Avatar. The gap in technology is a different matter entirely.
 

Awesome tech demo, didn't know about this one, thanks.

PS: I disagree with the current generation -> Avatar jump being smaller than Halo 2 -> Crysis. When I was at the cinema I was totally blown away by the quality of the rendering; it was a Star Wars-like jump for me. There were so many details, and everything was realistically lit and simulated; Pandora just looked real and believable, and of course completely different from Earth. For me, it was a great achievement and the next gen of CG rendering.
 
The graphics fidelity of Avatar is miles away from current console technology. Miles away. There's a far greater leap from Crysis to Avatar than from Halo to Crysis. Have you seen how much computing power it takes to render a single frame in Avatar?

Isn't that sort of the point though? That improving the graphics rendering requires superlinearly greater computational effort for ever finer nuances? That's exactly what those speaking of diminishing returns mean - that the factor of two or so that a generational step in lithography can optimally provide just doesn't buy all that much in terms of obvious improvement anymore.
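To put a number on that "factor of two or so": ideal planar scaling goes with the square of the feature-size ratio, so a full node shrink roughly doubles the transistors that fit in the same die area (real designs usually capture less than this ideal). A quick illustrative calculation, with the node pairs picked only as examples:

# Ideal density gain from a full lithography node shrink.
for old_nm, new_nm in [(65, 45), (45, 32), (40, 28)]:
    density_gain = (old_nm / new_nm) ** 2   # area scales with the square of the linear shrink
    print(f"{old_nm}nm -> {new_nm}nm: ~{density_gain:.2f}x ideal transistor density")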
 
I don't think you'll see the same visual leap that you have in previous generations.
I'm sure to tech-savvy people looking for it, it'll be there, but to the average game buyer who doesn't go around A/B-ing everything, I'm not so sure.
To my mind the big leap this generation was HD resolutions. I don't think that 3D is mature enough to have the same effect, though I do think it adds significantly to games when it's done well.

For some years the mantra has been: "We don't need more pixels, we need better pixels". And we have gotten better pixels. Maybe it is time to take a step back and re-evaluate the situation. Maybe at this point in time, more pixels bring a greater overall sense of improvement than better pixels.

I found the interview with Carmack interesting because he made exactly this point: that the image quality gained from a factor of two to three more work per pixel couldn't compete with the improvement that increased frame rate provided.

Frame rate: 30 fps comes from the old NTSC TV standard (60Hz interlaced) that originated in the AC power line frequency in the US. This was in turn inspired by the 24fps of celluloid cinema film, a standard dictated by mechanical limitations of almost a century ago. The low frame rate is a problem that can be worked around (to some extent) in movie production, but is much more obvious in interactive gaming.
Doubling the frame rate requires twice the processing power but otherwise no new technology.

Resolution is another factor that cannot directly be traded against increased per-pixel quality. To see this, imagine that you reduce the resolution of an image to a single, perfectly rendered pixel... Resolution can be regarded as information density (as opposed to information quality). Rendering in 720p (or below) might have been a reasonable tradeoff when many people only had "HD-Ready" TVs or even SD sets, and 1024x768 was still rather common on computer monitors. In the 2014-2020 time span this may not be the case anymore. Going to 1080p as the standard rendering resolution would cost roughly a factor of two, again without requiring any otherwise new technology.

3D is my last example where a factor of two in the number of rendered pixels brings more immersion in the shape of a whole new dimension. :) Stereoscopic 3D, done well enough, really pays for itself in terms of immersion per render effort. But there are other issues involved with its uptake, issues that may or may not be resolved for the mainstream. It is, however, an example of where we can achieve a strong effect without being more sophisticated in how we render individual pixels. We just need more of them.
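The arithmetic behind those factor-of-two figures is simple enough to write down. A small sketch, under the crude assumption that rendering cost scales linearly with pixels drawn per second:

# Pixels per second as a first-order proxy for rendering cost.
res_720p = 1280 * 720       # 921,600 pixels
res_1080p = 1920 * 1080     # 2,073,600 pixels

print(f"1080p vs 720p (same fps):          {res_1080p / res_720p:.2f}x")            # 2.25x
print(f"60 fps vs 30 fps (same res):       {60 / 30:.2f}x")                          # 2.00x
print(f"Stereo 3D vs mono (same res/fps):  {2.0:.2f}x")                              # two views
print(f"All three combined vs 720p30 mono: {res_1080p / res_720p * 2 * 2:.2f}x")     # 9.00x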

From a rendering technology point of view, all of the above is terribly boring. All it requires is - more pixels/s. So I can understand if graphics technology folks direct their attention elsewhere. But maybe we're at a point now where rendering more pixels may actually be a good idea in terms of perceivable improvement/watt or dollar.
 
...

From a rendering technology point of view, all of the above is terribly boring. All it requires is - more pixels/s. So I can understand if graphics technology folks direct their attention elsewhere. But maybe we're at a point now where rendering more pixels may actually be a good idea in terms of perceivable improvement/watt or dollar.

Interesting point, overall...
 
I was looking at tellys in a shop the other day. They were playing some CGI film about a surfing penguin or something. From the right distance the 1024x768 plasmas looked better than the full-1080 LCDs of the same price.

Higher resolution helps once you get close enough, but better pixels make stuff look better from any distance.

Anecdote time: a chum of mine thought CoD:BO was running at a higher resolution than Halo 3. A little sub-pixel AA can go a long way. Please, god, don't replace sub-pixel AA with post-process filters.
 
I'd be very pleased if the Wii U had a custom RV770 / Radeon 4850 with embedded memory. Even a custom RV740 would be more than decent, as long as the Wii U GPU has 16 ROPs. Cutting down to 8 ROPs à la RSX would hurt the chances of getting plenty of native 1080p games and good framerates.
 
I was looking at tellys in a shop the other day. They were playing some CGI film about a surfing penguin or something. From the right distance the 1024x768 plasmas looked better than the full-1080 LCDs of the same price.

Higher resolution helps once you get close enough, but better pixels make stuff look better from any distance.

Anecdote time: a chum of mine thought CoD:BO was running at a higher resolution than Halo 3. A little sub-pixel AA can go a long way. Please, god, don't replace sub-pixel AA with post-process filters.

Hallelujah!
To both points. :)
Disregarding for the moment the trend towards mobile, the trend in the home is clearly towards displays getting larger, and since rooms don't grow with them, the angle of view covered by the screen is increasing. So resolutions need to increase to fill that view with information (and to reduce the visibility of various pixel-level artefacts).
Usage patterns determine needs, clearly, but the trends are obvious. As far as TV resolutions go, we'll probably be stable at 1080p for a fair amount of time - my cineaste friends are already complaining, though, and want digital-cinema-distribution quality.
 
It is, however, an example of where we can achieve a strong effect without being more sophisticated in how we render individual pixels. We just need more of them.

Switching from the basic Blinn-Phong shading model to more realistic, richer shading models like Oren-Nayar + Cook-Torrance will require 3x or 4x the shading cost.
And that's a step that needs to be made, IMO.
That and 1080p.
That's my baseline for the next generation, I'd say.
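To make the comparison concrete, here is a minimal, illustrative sketch (plain Python rather than shader code; the material parameters are purely hypothetical examples) of the per-pixel math of Blinn-Phong next to an Oren-Nayar diffuse plus Cook-Torrance specular. The extra trig, exponentials and divides in the second version are roughly where a 3-4x shading-cost estimate comes from:

# Vectors are plain (x, y, z) tuples; all inputs are assumed normalized.
import math

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def normalize(a):
    l = math.sqrt(dot(a, a))
    return (a[0]/l, a[1]/l, a[2]/l)

def blinn_phong(n, l, v, shininess=32.0):
    """Classic Blinn-Phong: one half-vector, one pow() -- very cheap."""
    h = normalize((l[0]+v[0], l[1]+v[1], l[2]+v[2]))
    diffuse = max(dot(n, l), 0.0)
    specular = max(dot(n, h), 0.0) ** shininess
    return diffuse, specular

def oren_nayar_cook_torrance(n, l, v, sigma=0.3, m=0.25, f0=0.04):
    """Oren-Nayar diffuse + Cook-Torrance specular (Beckmann D, Schlick F)."""
    n_dot_l = max(dot(n, l), 1e-4)
    n_dot_v = max(dot(n, v), 1e-4)

    # --- Oren-Nayar diffuse (sigma = surface roughness) ---
    sigma2 = sigma * sigma
    A = 1.0 - 0.5 * sigma2 / (sigma2 + 0.33)
    B = 0.45 * sigma2 / (sigma2 + 0.09)
    theta_i, theta_r = math.acos(n_dot_l), math.acos(n_dot_v)
    alpha, beta = max(theta_i, theta_r), min(theta_i, theta_r)
    # cos(phi_i - phi_r): angle between L and V projected onto the tangent plane
    lp = tuple(l[i] - n_dot_l * n[i] for i in range(3))
    vp = tuple(v[i] - n_dot_v * n[i] for i in range(3))
    denom = math.sqrt(dot(lp, lp) * dot(vp, vp))
    cos_phi = dot(lp, vp) / denom if denom > 1e-6 else 0.0
    diffuse = n_dot_l * (A + B * max(cos_phi, 0.0) * math.sin(alpha) * math.tan(beta))

    # --- Cook-Torrance specular (m = microfacet roughness, f0 = base reflectance) ---
    h = normalize((l[0]+v[0], l[1]+v[1], l[2]+v[2]))
    n_dot_h = max(dot(n, h), 1e-4)
    v_dot_h = max(dot(v, h), 1e-4)
    D = math.exp((n_dot_h**2 - 1.0) / (m*m * n_dot_h**2)) / (math.pi * m*m * n_dot_h**4)
    F = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
    G = min(1.0, 2.0*n_dot_h*n_dot_v/v_dot_h, 2.0*n_dot_h*n_dot_l/v_dot_h)
    specular = D * F * G / (4.0 * n_dot_l * n_dot_v)

    return diffuse, specular

if __name__ == "__main__":
    n = (0.0, 0.0, 1.0)
    l = normalize((0.3, 0.2, 1.0))
    v = normalize((-0.2, 0.1, 1.0))
    print("Blinn-Phong      :", blinn_phong(n, l, v))
    print("Oren-Nayar + C-T :", oren_nayar_cook_torrance(n, l, v))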
 