Are current-gen hi-def consoles near their achievable limit?

Sure, new techniques are still being developed, but these days I think that has less to do with hardware and more to do with algorithmic development.
Code quality counts for a lot. Learning how to write efficient code for a piece of hardware, especially SPU code without the execution aids of x86, where you have to make micro-level choices that can severely cripple the speed of your processor, makes a big difference in how much work you can do. That surely has to be an issue of developers learning to understand and design for the systems, rather than developer tools magically optimising their source code. If a new algorithm paper is released with generic demo C code, one developer copy/pasting that into their game won't get as much from their engine as another developer who understands the algorithm and crafts it from the ground up for the target machine using every efficiency they can think of.
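To give a feel for the kind of micro-level choice I mean, here's a tiny, made-up example (nothing from any real engine): the same clamp loop written with data-dependent branches and written branch-free. On an in-order core like the SPU, with no real branch prediction, the second form is the sort of thing you end up reaching for by hand.

```cpp
// Purely illustrative: clamping an array of values, written two ways.
// On an in-order core with costly mispredicted branches (like an SPU),
// the branch-free version behaves far more predictably.
#include <cstddef>

void clamp_branchy(float* v, std::size_t n, float lo, float hi) {
    for (std::size_t i = 0; i < n; ++i) {
        if (v[i] < lo)      v[i] = lo;   // data-dependent branches every element
        else if (v[i] > hi) v[i] = hi;
    }
}

void clamp_branchless(float* v, std::size_t n, float lo, float hi) {
    for (std::size_t i = 0; i < n; ++i) {
        float x = v[i];
        x = (x < lo) ? lo : x;           // compilers typically turn these into
        x = (x > hi) ? hi : x;           // select/min-max instructions, no branches
        v[i] = x;
    }
}
```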

I imagine that's the main reason for games improving - developer know-how in targeting the machines. Once that know-how has reached a decent level of efficiency, the hardware is being exploited and it's then just a matter of picking which features and assets to make it look good. Until then, no amount of fancy lighting techniques and carefully crafted artwork will make a poorly written game deliver what it's fully capable of.
 
My question is: do you think developers are pushing near the achievable level of fidelity on current-gen consoles, or is there still a lot more juice to be gotten out of them? L.A. Noire seems like a huge leap from the very first-generation titles, for example.

It's just my opinion, but L.A. Noire doesn't do anything remarkable in terms of utilizing the hardware. They're basically streaming the face textures from disc as far as I know, but everything else in the game engine is quite mediocre. The interesting part of the technology lies in the capture process, in creating the assets - displaying them has nothing particularly interesting or impressive going on.

The PS3's GPU is basically a 7800 GTX, which is ancient by today's standards, yet since it's a closed-box environment developers can really push the hardware to its limits. But do you think they are close to that limit yet?

Performance limits are almost completely exhausted IMHO. What's left to explore is how to do more with the same poly and texture budgets; and fields of technology that require more complexity and work, instead of more performance or memory.
 
I mostly disagree with the forum's opinions on technical superiority anyway; pretty graphics are much more a function of art direction and good choices during asset creation than they are of technical merit.

Finally someone else agrees :)
 
I think it's a meaningless question.
Developers certainly learn from previous experience with a platform, and second titles are often a jump over first titles, though in practice that's as much a function of tight launch windows and late hardware visibility as it is anything else.

I mostly disagree with the forum's opinions on technical superiority anyway; pretty graphics are much more a function of art direction and good choices during asset creation than they are of technical merit.
When I did my first game port circa 1989 I realized how little good technology or software engineering had to do with pretty games.

Sure, new techniques are still being developed, but these days I think that has less to do with hardware and more to do with algorithmic development.

I think what does improve through console life cycles is the tools, and that probably has as much effect on quality as anything else. Ask any AAA product developer how many lines of code are in the tools they use to build a game and how many are in the actual product.
Well, explain that to some staunch fans and they aren't going to listen. You're exactly right but it's almost always console users and not developers who do the policing here.

I've found that a few fans really don't care if a person with knowledge or a weighty opinion contradicts them. Maybe that's because they are human beings with emotions and these things are important to them, or they act as unpaid PR staff, or they have to justify their investments. Still, the least they could do is listen instead of telling others to be ashamed because their opinions are apparently wrong (meaning you don't like a game they love). And they don't get it.

On a side note: I visit other gaming forums too, though I prefer reading to writing, so I don't participate there. I think this one is more progressive in that regard, unsurprisingly.

I just went in for some reading and ended up thinking about joining some of those forums where it's a battlefield. I've basically been in full lurker mode, and although I keep trying to come out of it, I lose my nerve.

I think it's no use trying to debate with people who don't share common ground with you, so I only join forums where I can stay calm and not argue over silly things. It was bearable between the ages of 20 and 27, but nowadays I don't like the hassle it entails.

Other than that, I agree with Rangers that I am seeing signs of weakening. I thought current consoles were all-powerful back in 2005. I still think it's great hardware, considering they were designed almost six years ago.

I can't honestly tell what the problem is; it might be easier for a developer to explain exactly why they can't progress any more. But they are still able to run games in HD and make them look decent.

I wonder, though, why an "ancient" game like Half-Life 2 runs at 30 fps when the consoles are orders of magnitude more powerful than the PCs of the time.
 
Don't make this an either/or ... the quality of games is generally a result of a successful marriage between art and technology, or all games would be board games (and even then technology comes in at the manufacturing level, making certain things affordable that previously perhaps weren't, like color print).

As always, sometimes less is more, but you will never have art in any game without any technology whatsoever. From there on, the range of possibilities for artists increases as technology enables more. It is of course absolutely true that, given a good 2D artist, you can make a great-looking 2D game on almost any platform released in the last 20 years. Even there, though, the available storage and RAM are going to determine the scope of work possible.

Taking it from there, good luck to an artist who wants to create something with a nice interplay of shadows and lights in an engine that only supports one light and can only shadow one object (with ugly, ragged shadows to boot). Sure, you can limit yourself to making something that still looks good, like some kind of cool, near-binary black-and-white style.

But to make a simple summary: you can't extol the virtues of HDR for artists on the one hand, and the irrelevance of technological advancements on the other.

Also, if we're taking the OP's question at face value, surely we're discussing primarily algorithmic advancements rather than anything else, because the hardware is fixed. I think that encoding HDR values in a different way, inventing algorithms like MLAA, finding new ways of dealing with shadows, or using deferred rendering to increase the number of lights possible all make important contributions.
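To illustrate the deferred rendering point with a very rough, invented sketch (all the types below are made up, it's nobody's actual engine code): the scene is rasterised once into a G-buffer, and after that each light is just a screen-space accumulation over texels, so adding lights no longer multiplies with scene geometry.

```cpp
// Rough sketch of per-texel light accumulation in a deferred shader.
// Everything here is invented for illustration; real implementations use
// light volumes or tiles, proper BRDFs, and so on.
#include <vector>
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Light { Vec3 pos; Vec3 colour; float radius; };
struct GBufferTexel { Vec3 albedo; Vec3 normal; Vec3 world_pos; };

static float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 shade_texel(const GBufferTexel& g, const std::vector<Light>& lights) {
    Vec3 out{0.0f, 0.0f, 0.0f};
    for (const Light& l : lights) {                       // cost ~ lights * pixels,
        Vec3 d{l.pos.x - g.world_pos.x,                   // independent of how much
               l.pos.y - g.world_pos.y,                   // geometry was drawn
               l.pos.z - g.world_pos.z};
        float dist = std::sqrt(dot3(d, d));
        if (dist <= 0.0f || dist > l.radius) continue;    // trivial per-pixel cull
        Vec3 ldir{d.x / dist, d.y / dist, d.z / dist};
        float ndotl = std::max(0.0f, dot3(g.normal, ldir));
        float att = 1.0f - dist / l.radius;               // crude attenuation
        out.x += g.albedo.x * l.colour.x * ndotl * att;
        out.y += g.albedo.y * l.colour.y * ndotl * att;
        out.z += g.albedo.z * l.colour.z * ndotl * att;
    }
    return out;
}
```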

I do agree, by the way, that tools are important and frequently overlooked. And yes, the value of hardware is often overrated and underused.

I still also think that hardware is underused in terms of providing gameplay experiences that are not simply better graphics or sound. I think this is one of the reasons the Wii was successful: new (motion) controllers give much more direct cause for new gameplay than more graphics and sound capability, which is clearly witnessed by the current HD consoles mostly having games that are HD versions of last-generation games.
 
I guess we'll see with the open-world game Infamous 2 right around the corner. I believe they are shooting for 60 fps at 720p. I think, as far as clock cycles are concerned, the PS3 has just reached its limit with this batch of AAA first-party games.
Where have you heard about Infamous 2 being 60 fps? I seriously doubt they are shooting for 60 fps in an open-world game. It doesn't make much sense, and since R* games usually run at sub-30 fps, I can't imagine this running at twice the frame rate.

On topic, I do believe the consoles are clearly reaching their limits, mainly because of RAM and the GPUs. Now it's all about better use of tricks, tuning the tech that is important to the developer's vision, and dumping the "unnecessary" parts.
 
Code quality counts for a lot. Learning how to write efficient code for a piece of hardware, especially SPU code without the execution aids of x86, where you have to make micro-level choices that can severely cripple the speed of your processor, makes a big difference in how much work you can do. That surely has to be an issue of developers learning to understand and design for the systems, rather than developer tools magically optimising their source code. If a new algorithm paper is released with generic demo C code, one developer copy/pasting that into their game won't get as much from their engine as another developer who understands the algorithm and crafts it from the ground up for the target machine using every efficiency they can think of.

I imagine that's the main reason for games improving - developer know-how in targeting the machines. Once that know-how has reached a decent level of efficiency, the hardware is being exploited and it's then just a matter of picking which features and assets to make it look good. Until then, no amount of fancy lighting techniques and carefully crafted artwork will make a poorly written game deliver what it's fully capable of.

I wrote a big response to it and lost it, so you get the short version.

Yes, software quality is important, but any decent assembly programmer (which I'll admit is a rarity these days) should be able to push out quality SPU code for a given algorithm. It's not that hard. The hard part is optimizing the data flow, and that usually does improve, certainly in the second release and incrementally thereafter.
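For what it's worth, here's a toy sketch of the data-flow side (the dma_get/dma_wait calls below are just stand-ins, not the actual MFC intrinsics): the whole trick is to have the transfer of the next chunk in flight while the current one is being processed, so the ALUs never sit waiting on main memory.

```cpp
// Hypothetical sketch of the double-buffering pattern used on the SPUs.
// dma_get()/dma_wait() are plain stand-ins (a memcpy and a no-op); real code
// would use the MFC intrinsics, but the shape is the point: the fetch of
// chunk i+1 overlaps the processing of chunk i.
#include <cstddef>
#include <cstring>

constexpr std::size_t CHUNK = 1024;

static void dma_get(float* local, const float* src, std::size_t count, int /*tag*/) {
    std::memcpy(local, src, count * sizeof(float));   // stand-in for an async DMA
}
static void dma_wait(int /*tag*/) {}                  // stand-in for a tag-status wait

static void process(float* chunk, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i) chunk[i] *= 2.0f;  // dummy work
}

void process_stream(const float* src, std::size_t total_chunks) {
    static float buf[2][CHUNK];                       // two local-store buffers

    dma_get(buf[0], src, CHUNK, 0);                   // prime the pipeline
    for (std::size_t i = 0; i < total_chunks; ++i) {
        const int cur = i & 1, nxt = cur ^ 1;
        if (i + 1 < total_chunks)                     // kick off the next transfer
            dma_get(buf[nxt], src + (i + 1) * CHUNK, CHUNK, nxt);
        dma_wait(cur);                                // ensure this chunk has landed
        process(buf[cur], CHUNK);                     // compute overlaps the fetch
    }
}
```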

Good artwork can make borderline appalling tech look good. No amount of stunning tech will make bad artwork look good.

If I had to choose between 50% more polygons on my characters or giving the artists a better workflow with more chance for iteration, today I'd take the second, because it yields better visual results. Obviously these are not usually exclusive choices.
 
I wrote a big response to it and lost it, so you get the short version.
Don't you just HATE that?!!

Yes, software quality is important, but any decent assembly programmer (which I'll admit is a rarity these days) should be able to push out quality SPU code for a given algorithm. It's not that hard. The hard part is optimizing the data flow, and that usually does improve, certainly in the second release and incrementally thereafter...
I agree absolutely with the art being mostly what people regard as 'good tech', in that a good-looking game is what people consider technically advanced; the OP even says as much. And perhaps the basic level of coding in games development is high, such that the performance loss due to bad choices or poor code isn't that great, and I'm holding a somewhat pessimistic view of how well console development is run based on standards in lots of industries that are slap-dash, slipshod, driven by lousy management chasing impossible targets etc. (e.g. Dilbert). Still, that doesn't really contradict the OP - "My question is: do you think developers are pushing near the achievable level of fidelity on current-gen consoles, or is there still a lot more juice to be gotten out of them?" The "achievable fidelity" will be the most polys and textures and shaders at the highest resolutions, which is all about using the hardware efficiently. Whether such games are regarded as beauties or not is a parallel but different consideration.

"Do you think games will look better in the future?" is a different thread as I see it. To the first question (this thread) my answer is basically 'no'. Developers have a 'good' handle on the current hardware, and there isn't going to be a marked improvement in work they can do. Even if their code isn't very efficient, whatever causes that isn't going to change, so the current standards will remain. As you say, new techniques like MLAA and AO can be used to improve the look of games over their predecessors, and in some rare cases maybe that'll free up resources that can be used to increase fidelity, but overall I think what we have now is the limit, and it'll only be different choices in compromises that'll make a game look 'better' than others. As for "will games look better," that's possible with new techniques and workflows, but generally the artists aren't going to be getting better because, well, it's art! Every artist in the business is going to be good, and it'll be just be artistic choices and direction and workflow and management that determines if the end results work well or not. So they'll be dependent on new techniques, which are fairly rare and can't be relied upon. Often new techniques only become available when there is enough processing power to explore and exploit them.

So in three years' time, I don't think games on PS360 will be notably advanced over current titles. The technology can't be pushed much further, and art is art and will always be subjective.

As a test case, we can review any franchise or series of games from last gen and see how things improved. Take someone like Snowblind Studios, who I'm very familiar with. Their first PS2 game was quite remarkable in how it looked. Their second PS2 game improved considerably, clearly using the hardware more efficiently to get more from it. Their third game, a sequel, wasn't noticeably better, nor was their 3rd full game, Justice League Heroes, which added tech but actually didn't look as pretty as CON. The second game was enough to get close to the perceivable limits of the hardware, such that whatever engine improvements they could make, the end results didn't look any better.
 
I agree absolutely with the art being mostly what people regard as 'good tech', in that a good-looking game is what people consider technically advanced; the OP even says as much. And perhaps the basic level of coding in games development is high, such that the performance loss due to bad choices or poor code isn't that great, and I'm holding a somewhat pessimistic view of how well console development is run based on standards in lots of industries that are slap-dash, slipshod, driven by lousy management chasing impossible targets etc. (e.g. Dilbert). Still, that doesn't really contradict the OP - "My question is: do you think developers are pushing near the achievable level of fidelity on current-gen consoles, or is there still a lot more juice to be gotten out of them?" The "achievable fidelity" will be the most polys and textures and shaders at the highest resolutions, which is all about using the hardware efficiently. Whether such games are regarded as beauties or not is a parallel but different consideration.


No, because I think there are algorithmic leaps and better compromises to be had. And that doesn't include scope for artistic choices like radical art styles.
I do think that we are unlikely to see big leaps in general quality, but there will be standouts, and I think that would likely remain true even if the lifetime of the PS3/360 were >10 years, though I think we'll see the PS4/Xbox-whatever before that.

There is plenty of bad project management in games, and deadlines certainly impact quality, though honestly a lot of teams would never ship if they didn't have them.
Console games in particular have the advantage of being "disposable" software; it lets you be somewhat more pragmatic when necessary and when pushed against deadlines.

I know the guys who started Snowblind, and while Ezra is a very smart guy the real value in that Studio IMO is their art, they have 2 of the better artists I've ever worked with on staff.

As I mentioned earlier, you see a big leap from game 1 to game 2; it's a function of understanding what works both from an engineering standpoint and, just as importantly, from an art standpoint. You make different tradeoffs with the experience.

Very few things impress me technically; I'm old and cynical. I still think the original Gran Turismo is up there, not for polygon counts or graphics, but because I know how hard it was to put that simulation (albeit a fairly poor one) on that box.
I've seen the animation code for Madden and I think it's impressive, though you'd be hard pushed to find a message board fan who would agree.
I worked on Spore, a game probably with more clever technical solutions to somewhat unique problems than any other I've seen (I can't take credit for any of them), but very few people understand which parts were technically complex and which weren't.

I just get somewhat irritated when people equate technical quality with graphical quality; while they are certainly not orthogonal, they are certainly not equivalent.
 
No, because I think there are algorithmic leaps and better compromises to be had. And that doesn't include scope for artistic choices like radical art styles.

That's something I agree wholeheartedly with, and tried to point out in a clumsy manner earlier.

I find it fascinating myself when a developer mentions they found a better way to do X feature at less resource cost with minor compromises to quality. Or the rare case where they can do Y feature better and faster due to a new algorithm, etc...

And I believe things like that will continue all the way up until active development halts for the X360/PS3.

Regards,
SB
 
I think even the first generation of games on these consoles maxed them out. The consoles are maxed out; the only way we can see game visuals improve is through smart game design rather than raw horsepower.
 
I believe the most significant reason for improved visual quality is the improved content creation tools.

Current-generation consoles were the first to have (long) programmable shaders, and the first that made global-scale dynamic per-pixel lighting possible. Most developers completely rewrote their engines and tools for the new shader hardware.

Console developers didn't know how to use shader architectures that well at the beginning, and weren't that familiar with how to implement dynamic lighting efficiently (all kinds of deferred lighting models and new shadow map sampling/partitioning techniques have been implemented since). The first set of content development tools weren't really that well planned for all the new possibilities. Now we have third-generation console engines and content development tools designed solely to exploit all the new possibilities, and the difference is striking.

The shader hardware and dynamic lighting haven't been the only radical changes recently. The newest console engines are the first ones to exploit heavy data streaming during gameplay. It's no longer possible to hold all the texture and geometry data in memory if you want to keep up with current image quality standards. All the newest engines implement really sophisticated data streaming technologies. Data streaming is also something that changed the tools a lot (virtual texturing is a good example here). Most first-generation Xbox 360 and PS3 titles loaded all data during level loading, and that limited the texture resolution and material quality a lot. Now that limitation is basically gone. Occasionally you see some minor popping caused by streaming in the newest CryEngine and Unreal Engine, but it's nothing compared to the first console games using heavy data streaming (Mass Effect 1 is a good example).
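Just to sketch the general shape (names invented, nothing engine-specific): each frame the streaming system collects the texture/mip requests the renderer wants, sorts them by a visibility-based priority, and issues loads only up to an IO budget. The requests that don't fit wait a frame, which is exactly where the occasional popping comes from.

```cpp
// Toy sketch of a budgeted, prioritised streaming update. All names are made
// up; real systems (virtual texturing especially) are far more involved, but
// the basic shape is similar.
#include <queue>
#include <vector>
#include <cstddef>

struct StreamRequest {
    int asset_id;
    int mip_level;           // finer mips get requested as the camera approaches
    float priority;          // e.g. projected screen coverage
    std::size_t bytes;
};

struct ByPriority {
    bool operator()(const StreamRequest& a, const StreamRequest& b) const {
        return a.priority < b.priority;      // max-heap: most visible first
    }
};

// Pick the requests to load this frame without blowing the per-frame IO budget.
std::vector<StreamRequest> pick_loads(std::vector<StreamRequest> wanted,
                                      std::size_t byte_budget) {
    std::priority_queue<StreamRequest, std::vector<StreamRequest>, ByPriority>
        queue(ByPriority{}, std::move(wanted));

    std::vector<StreamRequest> chosen;
    std::size_t spent = 0;
    while (!queue.empty() && spent + queue.top().bytes <= byte_budget) {
        chosen.push_back(queue.top());       // most important data streams first
        spent += queue.top().bytes;
        queue.pop();
    }
    return chosen;                           // everything else waits a frame:
}                                            // that's the occasional pop-in
```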

Good tools save a lot of content development time. Now that the basic technology is working well, the content tool developers have time to focus on features that boost productivity. Artists have more time to actually spend creating the content, and the tools support them better. Commercial 3D model creation tools have also improved a lot. We now have, for example, really good sculpting software (that auto-generates lots of stuff that had to be created manually before).
 
Well, strictly speaking, ZBrush has been available since well before the current generation, from 2003 or so. That software played a huge role in making Gears of War possible; without it a lot of the detail work in that game would have been impossible.
But it's also true that the first ZBrush had trouble with polygon counts above 2-2.5 million. Just recently I had to adjust settings for one of our artists because (he's really an artist, which means he doesn't understand or care about technical details) a simple shirt was subdivided to some 20 million polygons - which was of course completely unnecessary (and very slow to work with).

As for further development, I'd consider stuff like Uncharted 2 adopting filmic tone mapping and such a good example. Doesn't require more performance or memory, but has a very strong effect on the overall results.
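For reference, this is roughly the curve Hable described for Uncharted 2, with the constants as commonly quoted from his talk (treat the exact values as approximate). It's just a per-pixel curve in the tone-mapping pass, so it costs a handful of ALU ops and no extra memory, yet it changes the entire look of the image.

```cpp
// Filmic tone-mapping curve in the style John Hable presented for Uncharted 2.
// Constants are the commonly quoted ones; tweak to taste.
#include <cmath>

static float hable(float x) {
    const float A = 0.15f;  // shoulder strength
    const float B = 0.50f;  // linear strength
    const float C = 0.10f;  // linear angle
    const float D = 0.20f;  // toe strength
    const float E = 0.02f;  // toe numerator
    const float F = 0.30f;  // toe denominator
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

// Map a linear HDR value to [0,1] for display, then gamma-encode.
float tonemap(float hdr, float exposure = 2.0f) {
    const float W = 11.2f;                       // linear white point
    float mapped = hable(exposure * hdr) / hable(W);
    return std::pow(mapped, 1.0f / 2.2f);        // simple gamma 2.2 encode
}
```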

There's still a lot of room for this kind of stuff in practically every part of the graphics pipeline - and then there are the vast possibilities in gameplay systems. Ultima VII had 24-hour schedules for an entire game world of NPCs, all running on a 386; it shouldn't be that hard to implement something like this in a current RPG. But it takes lots of work and testing to get it right (and yeah, more animations so that they can actually do stuff). I'd like to see features like this, even if it costs a few MBs of texture memory...
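The data side of something like that is almost trivially small; here's a made-up sketch of a daily schedule table (nothing to do with how Ultima VII actually stored it), just to show that the memory and CPU cost is negligible next to the content work.

```cpp
// Trivial sketch of a daily NPC schedule: a small table of (hour, activity,
// location) entries, looked up by the current in-game hour. All values below
// are invented placeholders.
#include <array>
#include <string>

struct ScheduleEntry { int hour; std::string activity; int location_id; };

struct Npc {
    std::array<ScheduleEntry, 4> schedule{{
        { 6, "wake_up", 1 },   // home
        { 8, "work",    2 },   // smithy
        {18, "tavern",  3 },
        {22, "sleep",   1 },
    }};

    // Latest entry whose hour has already passed; wraps around midnight.
    const ScheduleEntry& current_task(int hour_of_day) const {
        const ScheduleEntry* best = &schedule.back();
        for (const ScheduleEntry& e : schedule)
            if (e.hour <= hour_of_day) best = &e;
        return *best;
    }
};
```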
 
What about parallel programming in gaming? Has that been fully exploited?

Note: Excuse my ignorance if this is a stupid question because my CS knowledge is very basic
 
Depends what you mean by exploited.
Some things are easy to make parallel and developers do it.
There is a lot of code where the benefit doesn't outweigh the cost.

I've seen a lot of attempts at general parallel frameworks for games, but efficiency is usually a function of available task granularity, and dependencies become difficult to understand at some point.

Some developers probably do pretty well at it today, others probably just stick to the low hanging fruit.
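A trivial illustration of the granularity point (std::async below is just a stand-in for whatever job system an engine actually uses, and the names are made up): one task per item drowns in scheduling overhead, so the work gets carved into chunks large enough that the useful computation dominates.

```cpp
// Toy example: batching fine-grained work into coarse tasks. The chunk size is
// the granularity knob; too small and scheduling overhead wins, too large and
// the cores don't balance.
#include <algorithm>
#include <future>
#include <vector>
#include <cstddef>

static void update_particle(std::size_t /*i*/) {
    // per-item work would go here
}

void update_all(std::size_t count, std::size_t chunk_size = 4096) {
    std::vector<std::future<void>> tasks;
    for (std::size_t begin = 0; begin < count; begin += chunk_size) {
        const std::size_t end = std::min(begin + chunk_size, count);
        tasks.push_back(std::async(std::launch::async, [begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                update_particle(i);          // one coarse task covers many items
        }));
    }
    for (auto& t : tasks) t.wait();          // join; dependencies stay trivial
}
```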
 
I know the guys who started Snowblind, and while Ezra is a very smart guy the real value in that Studio IMO is their art, they have 2 of the better artists I've ever worked with on staff.
Definitely (I presume you're including Brian Despain?). But that kinda highlights my argument. ;) The same team worked on the same hardware for BGDA and CON. What was it that made CON so much prettier and more involved, with more characters and lighting and shadowing effects? Better use of hardware, no? But once that level of hardware use was reached, there wasn't much further to go as regards 'better', only different. Like you say, a choice of render style can elevate a game to gorgeous without being technically demanding, or maybe it needs a technology to implement it, like LBP2's lighting.

Very few things impress me technically; I'm old and cynical. I still think the original Gran Turismo is up there, not for polygon counts or graphics, but because I know how hard it was to put that simulation (albeit a fairly poor one) on that box.
I've seen the animation code for Madden and I think it's impressive, though you'd be hard pushed to find a message board fan who would agree.
I worked on Spore, a game probably with more clever technical solutions to somewhat unique problems than any other I've seen (I can't take credit for any of them), but very few people understand which parts were technically complex and which weren't.

I just get somewhat irritated when people equate technical quality with graphical quality; while they are certainly not orthogonal, they are certainly not equivalent.
Absolutely, but that's a common fault with human perception, is it not? A lot of what impresses people is kinda shallow and flashy. Many pop stars are crap singers held aloft by technology fixing their voices, but they look fancy and are dressed up and get the attention that a plain vanilla, naturally gifted singer won't get. And Hollywood chucks in loud soundtracks and big explosions to add zing to limp stories and weak acting. As we've pretty much all noticed over the years on this board, innovation in games doesn't get you very far and Joe Public is more interested in what something looks like, so that's where the attention goes. Look at the "Game Technology - best of" thread and it's all about the visuals. No developer is ever going to get credit for a sweet little AI routine that simplifies their crowd behaviour simulation while freeing up CPU time for other tasks, or a cunning use of data packing to make streaming more efficient for their title, or a novel approach to terrain modelling. Such appreciation can only ever come from one's peers.

It is interesting to hear sebbbi supporting the toolchain position too though. From the outside we can see what a developer should be doing, but don't appreciate what it takes to do that. Stories like PS2's painful beginnings are well known, but I guess not many appreciate that the tools can be frustrating and limiting. Which is ironic considering we're all used to tools being a PITA even in other fields! This also points to the future not being tech orientated. The underlying hardware isn't the enabler, but given a decent standard, the tools are far more important in making the hardware do what developers want.
 
...As we've pretty much all noticed over the years on this board, innovation in games doesn't get you very far and Joe Public is more interested in what something looks like, so that's where the attention goes. Look at the "Game Technology - best of" thread and it's all about the visuals. No developer is ever going to get credit for a sweet little AI routine that simplifies their crowd behaviour simulation while freeing up CPU time for other tasks, or a cunning use of data packing to make streaming more efficient for their title, or a novel approach to terrain modelling. Such appreciation can only ever come from one's peers.
...

Speaking for myself, I find it difficult to appreciate things that are not visually oriented because I don't have the experience to pass judgement on them. I can get some sense of whether a developer has done something out of the ordinary, but usually it's through visual cues, because I don't have experience in game development and have never had to write an AI routine or model terrain. Whether people are more interested in graphics, I don't know. I just don't think they can really understand the rest of it without experience, so that's understandable. Like you said, I think appreciation for the other tech is probably only going to come from other people in the industry, because they'll understand through experience how difficult it would be to do certain things. The average person is probably going to judge how "technically proficient" a game is based on the parts they can assess, like the graphics.

Things like AI are tricky. One game could have a far more complex simulation of behavior, and people may not notice. There are a lot of games that have their names thrown around for having the best AI, but I can honestly say I've never seen much difference between them. Maybe some I think are a little better than others, but I understand that's largely subjective as a gamer, and it says nothing about how much work and how much complexity was involved in achieving the result. Also, the unpredictable nature of AI makes it hard to tell what is coincidental and what is intentional. Did the enemy just flank me, or did it just happen to move that way?

I'm a huge fan of physics simulation in games. I think it makes games dynamic, and adds an element of mystery and surprise. Still, it is something I can understand because of the visual component. I understand the complexity as a function of the visual representation.

This may seem somewhat off topic, but just my two cents on why people will always be looking to the visuals as a metric for understanding how far a console system is being pushed.
 