Predict: The Next Generation Console Tech

Just running 1080p with 8xAF and 4xAA, cleaning up shadows (higher resolution and better filtering) and increasing lighting precision (evolutionary stuff) would go a long way toward making this gen's assets look MUCH better. But do we need 4xAA at 1080p? Will 2xAA at 1080p be "good enough" for most? Would hardware makers be better off focusing on real shortcomings (e.g. the lack of grass in games! :oops: )?
Good point!

Another major shortcoming in my book is the lack of lifelike animations.

Time and again Crysis's graphics are brought up as the pinnacle of real-world realism, and each time I think to myself that the sense of reality created by Crysis's polished graphics is killed every time a character with jerky, robot-like movements and a dead stare appears on the screen.

Improvements are being made. Killzone 2 brought some weight to the characters (even the multiplayer ones), but the dead stare is still there. I am looking forward to seeing what Heavy Rain and Alan Wake bring to the table. But hell yeah, there is stuff that should be prioritized before 1080p with 8xAF and 4xAA, a lot of it stuff that could probably be fixed in software on the existing hardware.
 
Carmack talks a bit about next gen in the third video here:

http://www.cdaction.pl/news-7673/jo...o-materialu-prosto-ze-studia-id-software.html


-Thinks at least one next gen console might not have an optical drive, being download-only
-As devs, they want this gen to go on
-At least one more gen like this one, but we are approaching fundamental Moore's-law-type limits
-Talks about cloud computing/OnLive-type stuff; seems to find it promising
-Kind of funny: mentions id even has contingency plans in case somebody "jumps the gun" and launches a new console unexpectedly
-Mentions 2GB as a next gen RAM figure (not sure if that means anything concrete) and marvels at the power in our "game" systems
-Talks about a contingency plan for Doom 4: if it slips or a next gen console comes out, make it cross-generational, with different data sets but the same engine
 
It will also be interesting to see how much dev budgets increase. E.g. Epic talks about scenes with 200M polygons and how they use them as the source for their normal maps. While the tools may change, if companies do get much faster hardware and can utilize better techniques (e.g. displacement maps, or exotics like voxels), why would that scene have to explode to 4B polys? I am not sure it does. New rendering techniques that improve IQ (shadowing, shaders, lighting, etc.) applied to the same 200M-poly art source will look night-and-day better.
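As a rough back-of-the-envelope on why the source scene never has to ship (all figures below are illustrative assumptions, not Epic's actual pipeline numbers): the high-poly sculpt gets baked into normal maps, so the runtime cost is the maps, not the source polygons.

```python
# Rough, illustrative numbers -- nothing here comes from a real engine.
SOURCE_TRIS = 200_000_000      # the "200M polygon" source scene
BYTES_PER_TRI = 3 * 12         # 3 vertices x 3 floats each, no vertex sharing

raw_mesh_mb = SOURCE_TRIS * BYTES_PER_TRI / 2**20

# What actually ships after baking: a tangent-space normal map per asset.
MAP_RES = 2048                 # assumed texture resolution
BYTES_PER_TEXEL = 4            # uncompressed 8-bit RGBA; block compression shrinks this further

normal_map_mb = MAP_RES * MAP_RES * BYTES_PER_TEXEL / 2**20

print(f"raw source geometry:   ~{raw_mesh_mb:,.0f} MB")   # ~6,866 MB
print(f"one 2048^2 normal map: ~{normal_map_mb:.0f} MB")  # ~16 MB
```

The baked data is what the console sees either way, so the win next gen can come from lighting and shadowing the same maps better, rather than from re-authoring everything at 4B polys.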

Just running 1080p with 8xAF and 4xAA, cleaning up shadows (higher resolution and better filtering) and increasing lighting precision (evolutionary stuff) would go a long way toward making this gen's assets look MUCH better. But do we need 4xAA at 1080p? Will 2xAA at 1080p be "good enough" for most? Would hardware makers be better off focusing on real shortcomings (e.g. the lack of grass in games! :oops: )?

I can really get behind this point. The base assets used in today's games are actually really good imo; they don't necessarily need an order-of-magnitude increase in complexity if you ask me. Some PC versions of multiplatform titles look literally night-and-day better, yet the assets are the same. Getting rid of all the ugly artefacts still present in console games (low rendering resolutions, low-quality shadowing, harsh LOD settings, lack of decent AA solutions, pretty much non-existent anisotropic filtering, low-precision lighting models, tearing, sub-30fps framerates, that sort of thing) can have a huge impact on visual quality, and yet it needn't have much impact on the development budget.

The mentality of pushing consoles to breaking point, chasing effects they really aren't suited to at the expense of basic stuff like framerate and image quality, really needs to change next generation. Otherwise budgets are going to keep rising at unsustainable rates while rendering quality stays a decade behind PC standards. It's just not a good trade-off in my eyes.
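For a sense of why decent AA kept getting cut this gen, here's a quick back-of-the-envelope on raw multisampled framebuffer size (a sketch assuming a simple forward renderer with 32-bit colour and 32-bit depth/stencil; real renderers vary):

```python
# Back-of-the-envelope framebuffer cost for a multisampled render target.
def framebuffer_mb(width, height, msaa):
    bytes_per_sample = 4 + 4        # RGBA8 colour + D24S8 depth/stencil
    return width * height * msaa * bytes_per_sample / 2**20

for w, h, aa in [(1280, 720, 2), (1280, 720, 4),
                 (1920, 1080, 2), (1920, 1080, 4)]:
    print(f"{w}x{h} {aa}xAA: {framebuffer_mb(w, h, aa):5.1f} MB")
```

720p with 4xAA already comes to ~28 MB, which is why the 360's 10 MB of eDRAM forces tiling; 1080p with 4xAA is ~63 MB. Hardware sized for that from day one would make "free" AA realistic next gen.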
 
I believe that next gen, consoles won't have the burden of increased resolution again; they can safely target 720p (as most HDTVs are 1366x768, and the most common resolution is still 480i/576i).
The huge bandwidth and power increases from a GPU at least in the class of an RV740 or RV830 would then take care of the IQ issues.

I liked the Dreamcast a lot for its IQ. Good-looking 640x480 for the time, with 60fps games, a nice 8MB of video memory, and an included RGB cable (for the European, or at least the French, market). It looked better than the PS2 and miles above the N64, sitting somewhat halfway between the two gens.
 
I believe that next gen, consoles won't have the burden of increased resolution again; they can safely target 720p (as most HDTVs are 1366x768, and the most common resolution is still 480i/576i).
The huge bandwidth and power increases from a GPU at least in the class of an RV740 or RV830 would then take care of the IQ issues.

I liked the Dreamcast a lot for its IQ. Good-looking 640x480 for the time, with 60fps games, a nice 8MB of video memory, and an included RGB cable (for the European, or at least the French, market). It looked better than the PS2 and miles above the N64, sitting somewhat halfway between the two gens.

The Dreamcast's VGA output was really superb for its era; the PS2 in comparison was a major letdown, considering the Dreamcast came two years earlier and really demolished it in terms of IQ.

If the next generation of consoles settles for 720p I'll be very disappointed tbh; everything seems to be headed towards 1080p, so if a console launched in, say, 2012 can't hit that, it'd be rather disappointing.

Something like 1280x1080 for easy scaling to 1080p (like GT5, which looks really nice imo) as the base resolution would be a decent compromise; once you start adding 4xAA and 8xAF to that, you'll have some very nice image quality on 1080p TVs.
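The arithmetic behind that compromise is simple enough (a quick illustrative sketch):

```python
# Pixel counts for the candidate framebuffer resolutions.
FULL_HD = 1920 * 1080
candidates = {
    "720p (1280x720)":       1280 * 720,
    "GT5-style (1280x1080)": 1280 * 1080,
    "1080p (1920x1080)":     1920 * 1080,
}
for name, pixels in candidates.items():
    print(f"{name}: {pixels / 1e6:.2f} MP, {pixels / FULL_HD:.0%} of full 1080p")
```

1280x1080 shades only two-thirds of the pixels of full 1080p while keeping vertical detail 1:1, and the upscale to 1920 wide is a clean 1.5x horizontal stretch.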
 
I am in the camp voting for a conservative approach to resolution... To my eyes, AA & AF are better options than exaggerating the problem by increasing the resolution requirements... Every time I see the terrible AF in modern consoles I get sick to my stomach... They should've stuck with 480p if they couldn't get these consoles to render with at least 4x AF... Resolutions will keep going up, for both fidelity and the upcoming 3DTV transition (don't laugh, this is the last TV generation where the idea will be written off as a folly... the next wave of 3DTVs will be commoditized...); console makers need to concentrate on what looks good rather than just trying to meet a paper specification... I just wish that the end of specialized ASICs hadn't been ushered in by this generation. I would've loved seeing the consoles outpacing PCs with new hard-wired techniques rather than only having "close to metal, closed system" as an advantage... Maybe it wouldn't have been as much of a win, but just imagine hardware specialized for spherical harmonics, or a PPU, etc... If your hardware is fixed you should be creative... Other than sheer generational improvement, these new consoles... meh...
 
I would've loved seeing the consoles outpacing PCs with new hard-wired techniques rather than only having "close to metal, closed system" as an advantage...
What exactly could be put into an ASIC to give a worthwhile advantage? It seems to me everything important, especially going forwards, requires both buckets of processing power and programmability. Whatever silicon was spent on hardwired features would have been lost to programmability, and I doubt it would have brought any particular performance gains. E.g. a PPU in the XB360 would have meant less CPU or GPU silicon, or a vastly more expensive machine, and the gains in physics (let's say Sacred 2 got its leaf and particle effects) would have been balanced by a loss in graphics.

Perhaps the biggest limiting factor is developer ease. The days of developers spending time to exploit a system's quirks are over. The PS2 only scraped through last gen by being the runaway sales success. The GC's hardware wasn't well tapped. The XB did okay in terms of obtained performance because it was straightforward to develop for. There's no future for ASICs when massively parallel programmability offers cost-effective flexibility.
 
I guess you're right, the end of the special ASIC is a foregone conclusion. It just used to be interesting to speculate about new fixed-function features like the SNES's "Mode 7". Now even GPU talk has been drained of the intrigue of "how many vertex units versus how many pixel shaders" because of unification... The last remnant of that type of discussion is the FPU vs TMU ratio... or GPGPU fragment scheduling vs traditional fragment scheduling... I guess the console folks still have specialized interconnects and memory management to ponder... It's become so much more bland... yet the increase in brute processing power is still a generally exciting thing... I guess that making it easier on devs and having more (and better) games is worth more than having something juicy to discuss in these forums... ;)
 
Shifty Geezer said:
Perhaps the biggest limiting factor is developer ease. The days of developers spending time to exploit a system's quirks are over.
One could argue that the NDS and Wii are examples that a system's quirks, and exploiting them, are more important to success than ever. They just aren't quirks for extracting more instructions per cycle anymore.
As long as those quirks are given a marketing identity, anyhow.

In all fairness, I think that this gen a GPU of the same performance class as the ones we have, but designed for deferred shading, could have worked better (in some ways) than the generalist PC offshoots we got. But yeah, it gets more difficult to stand out in terms of specific hardware functionality as we go further.
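For what it's worth, here's a toy sketch of the deferred idea that would motivate such a design (numpy standing in for a real G-buffer pass; everything here is illustrative):

```python
import numpy as np

# Toy deferred shading: the geometry pass writes a G-buffer once, then
# every light is a cheap screen-space pass that reads it back.
H, W = 720, 1280
rng = np.random.default_rng(0)

# G-buffer laid down by the single geometry pass.
albedo = rng.random((H, W, 3)).astype(np.float32)   # surface colour
normal = np.zeros((H, W, 3), np.float32)
normal[..., 2] = 1.0                                # toy: everything faces the camera

def light_pass(light_dir, light_col):
    """One screen-space lighting pass: simple N.L shading from the G-buffer."""
    n_dot_l = np.clip(normal @ np.asarray(light_dir, np.float32), 0.0, 1.0)
    return albedo * n_dot_l[..., None] * np.asarray(light_col, np.float32)

# Many lights accumulate without re-rasterising the scene geometry.
frame = sum(light_pass(d, c) for d, c in [
    ((0.0, 0.0, 1.0), (1.0, 0.9, 0.8)),
    ((0.6, 0.0, 0.8), (0.2, 0.3, 0.8)),
])
print(frame.shape)   # (720, 1280, 3)
```

The hardware angle: a console GPU designed around this pattern (fat framebuffer bandwidth, cheap multiple render targets) would spend its transistors exactly where the generalist PC parts hurt.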
 
One could argue that the NDS and Wii are examples that a system's quirks, and exploiting them, are more important to success than ever. They just aren't quirks for extracting more instructions per cycle anymore.
As long as those quirks are given a marketing identity, anyhow.
Sure, the USP is a big thing. I just don't think ASICs are going to offer a USP at any point.

Just theoretically, what are the upcoming techniques that could have been used? Deferred rendering is a good example. Raytracing was suggested, but we know it's useless. A physics unit is too limited; we're better off with the flexibility. There are some techniques that aren't making it into games. A voxel-based system is an option. There's silhouette mapping for smooth contours. I don't know if a hardware-optimised AO engine would be worthwhile, but we've seen algorithm developments there offering better results or performance.
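On the AO point, the screen-space variants that are displacing any would-be fixed-function unit are simple enough to sketch (a toy, purely illustrative version):

```python
import numpy as np

# Toy screen-space ambient occlusion: darken pixels whose neighbours in
# the depth buffer sit closer to the camera. Purely illustrative.
H, W = 64, 64
rng = np.random.default_rng(1)
depth = rng.random((H, W)).astype(np.float32)    # stand-in depth buffer

RADIUS, BIAS = 3, 0.02
offsets = [(dy, dx) for dy in (-RADIUS, 0, RADIUS)
                    for dx in (-RADIUS, 0, RADIUS) if (dy, dx) != (0, 0)]

occlusion = np.zeros_like(depth)
for dy, dx in offsets:
    neighbour = np.roll(np.roll(depth, dy, 0), dx, 1)
    occlusion += (neighbour < depth - BIAS)      # neighbour occludes this pixel

ao = 1.0 - occlusion / len(offsets)              # 1 = fully open, 0 = buried
print(float(ao.min()), float(ao.max()))
```

Since it's just reads, compares and adds over a depth buffer, a programmable GPU runs it fine, and the algorithm keeps improving, which is exactly why baking it into an ASIC buys so little.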
 
I really can't see cloud computing playing a huge role next gen. Maybe cloud storage and more persistent-world features, but I can't see much computational work being offloaded to the cloud. Network latency is still too high. Maybe if there were a great redesign of the router, like the flow router I was reading about yesterday, things could be improved, but that won't happen any time soon. Maybe we'll see things like streaming content into games, so a "level" doesn't load completely off a disc, to get around storage limitations and to allow game content to change dynamically from the cloud. Guild Wars did something like that, but the levels were still cached locally.
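The frame-budget arithmetic makes the latency problem concrete (the RTT figure below is an assumption about a typical consumer connection, not a measurement):

```python
# Why per-frame cloud offload is a hard sell: the answer comes back
# several frames after the frame that asked for it.
RTT_MS = 100.0                       # assumed consumer round-trip time
for label, fps in [("30 fps", 30), ("60 fps", 60)]:
    budget = 1000.0 / fps
    print(f"{label}: {budget:.1f} ms frame budget vs {RTT_MS:.0f} ms RTT "
          f"-> result is ~{RTT_MS / budget:.0f} frames stale")
```

Anything the renderer or physics needs this frame can't wait 3-6 frames, which is why storage and slowly-changing world state are the plausible cloud candidates.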
 
I like the idea of cloud storage. This way all my saved game data and profile info is stored online (and can be cached offline as well). This would make moving between systems a lot easier, as well as going to a friend's house and logging in as yourself. Not to mention, hardware failures won't affect your profile.
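The client side of that could be quite thin. A sketch, assuming a hypothetical HTTPS sync service (the URL, token scheme and payload format below are all invented for illustration, not any real platform's API):

```python
import hashlib
import json
import pathlib
import urllib.request

SYNC_URL = "https://example.com/cloud-saves"     # hypothetical endpoint

def local_digest(path: pathlib.Path) -> str:
    """Fingerprint the local save so we only upload when it has changed."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def push_save(path: pathlib.Path, token: str) -> None:
    """Upload one save file to the (hypothetical) sync service."""
    body = json.dumps({
        "digest": local_digest(path),
        "data": path.read_bytes().hex(),         # naive encoding, fine for a sketch
    }).encode()
    req = urllib.request.Request(
        SYNC_URL,
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)                  # fire the upload

# e.g. push_save(pathlib.Path("profile.sav"), token="...")
```

Keep the local copy as the offline cache and reconcile by digest on boot, and you get exactly the "log in anywhere, survive a dead console" behaviour described above.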
 
I really can't see cloud computing playing a huge role next gen. Maybe cloud storage and more persistent-world features, but I can't see much computational work being offloaded to the cloud. Network latency is still too high. Maybe if there were a great redesign of the router, like the flow router I was reading about yesterday, things could be improved, but that won't happen any time soon. Maybe we'll see things like streaming content into games, so a "level" doesn't load completely off a disc, to get around storage limitations and to allow game content to change dynamically from the cloud. Guild Wars did something like that, but the levels were still cached locally.

I can certainly see a big draw to cloud storage: having game states persistently stored, à la MMORPGs.

Such that whenever you turn off your console/exit the game, your character progress is automatically stored in the cloud and is accessible from anywhere in the world.

Go on vacation? The hotel has a next gen game console available for rent? Hook it up and resume your game from where you left off.

Console die on you? No problem. Everything is stored in the cloud.

Likewise with having your games themselves stored in the cloud (digital distribution). Again, go anywhere in the world and voilà, your game is available to you. Go to a friend's house, your game is available to you. Granted, this aspect would require some good bandwidth.

Regards,
SB
 
Steam uses cloud storage for the settings in Left 4 Dead.

I'd really love to see this, as I've lost many savegames etc. due to faulty memory cards and whatnot. Sony's way of dealing with this (i.e. 5 lives per account) is nice too, although not perfect either.
 
Who would maintain the cloud storage for the game?
The publisher, the developer, some third party?

That's an ongoing expense with no end in sight (or they can stop and possibly strand the game) for any non-subscription sales model.
 
Steam uses cloud storage for the settings in Left 4 Dead.

I'd really love to see this, as I've lost many savegames etc. due to faulty memory cards and whatnot. Sony's way of dealing with this (i.e. 5 lives per account) is nice too, although not perfect either.

I was going to mention this; apparently they'll be rolling it out for game saves in the future as well. It would be a feature I'd love to have. I guess the console manufacturers will be keeping a close eye on Valve to see how well they manage it.
 