Are current-gen HD consoles near their achievable limits?

I know when you're talking about a console's power it's pretty much impossible to quantify, because it depends on the efficiency of the code and assets behind it. But in real-world terms there is obviously a limit to the graphical quality achievable in a game on a console (I'm talking about graphics because they are usually the benchmark of a console's or PC's power).

My question is: do you think developers are pushing near the achievable level of fidelity on current-gen consoles, or is there still a lot more juice to be got out of them? L.A. Noire seems like a huge leap from the very first-gen titles, for example.

The PS3's GPU is basically a 7800 GTX, which is ancient by today's standards. Being a closed-box environment, developers can really push the hardware to its limits, but do you think they are close to that limit yet?
 
I have it on good authority that the G7x series is really bad with dynamic branching. But anyway, it says something about RSX when you have a number of presentations going over the use of Cell to take over operations that would normally have run on the GPU. I'd wonder whether Cell is really being "tapped", though, given the nature of multiplatform development.
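As a minimal sketch of that offload pattern (plain scalar C++ rather than actual SPU intrinsics or DMA, and every name below is made up for illustration): the idea is to trim work on the CPU, here back-facing triangles, so the weak GPU never sees it.

Code:
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Drop back-facing triangles on the CPU so the GPU never has to shade them.
std::vector<uint32_t> cull_backfaces(const std::vector<Vec3>& verts,
                                     const std::vector<uint32_t>& indices,
                                     const Vec3& eye)
{
    std::vector<uint32_t> kept;
    kept.reserve(indices.size());
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        const Vec3& a = verts[indices[i]];
        const Vec3& b = verts[indices[i + 1]];
        const Vec3& c = verts[indices[i + 2]];
        const Vec3 normal = cross(sub(b, a), sub(c, a));  // face normal (CCW winding)
        if (dot(normal, sub(eye, a)) > 0.0f) {            // facing the eye: keep it
            kept.push_back(indices[i]);
            kept.push_back(indices[i + 1]);
            kept.push_back(indices[i + 2]);
        }
    }
    return kept;  // the trimmed index buffer is what gets submitted to the GPU
}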

The real indicator would be looking at titles from the same developers (or series) and seeing what changes/leaps there are.
 
I think it's all about using the strengths of what each console can do.
But what defines what a console is capable of is an open-world environment.
Look at Red Dead Redemption on Xbox 360: 720p with fantastic lighting, weather effects and a day-to-night transition, yet the PS3 version can't manage 720p.
Yet the PS3 can produce something like God of War 3 and Killzone 3.
The first-party titles cleverly use the strengths of the console to hide its faults.
 
I think it's all about using the strengths of what each console can do.
But what defines what a console is capable of is an open-world environment.
Look at Red Dead Redemption on Xbox 360: 720p with fantastic lighting, weather effects and a day-to-night transition, yet the PS3 version can't manage 720p.
Yet the PS3 can produce something like God of War 3 and Killzone 3.
The first-party titles cleverly use the strengths of the console to hide its faults.

I respectfully disagree that an open-world environment is the best showcase, because a more open world implies a different approach from other genres. It would be interesting to know how the RAGE engine works on the PS3, just to see whether any further optimizations are possible.
 
I don't know if 'at their limit' is the proper term. I think there's a lot to be gained from refining efficiency or using hardware in new ways.

But if you mean in comparison to current PC power, obviously. That doesn't mean that there aren't graphically impressive games coming out for the PS3/360.

It's the nature of a closed system versus an open one, where the next latest-and-greatest is never more than six months out.
 
I think it's all about using the strengths of what each console can do.
But what defines what a console is capable of is an open-world environment.
Look at Red Dead Redemption on Xbox 360: 720p with fantastic lighting, weather effects and a day-to-night transition, yet the PS3 version can't manage 720p.
Yet the PS3 can produce something like God of War 3 and Killzone 3.
The first-party titles cleverly use the strengths of the console to hide its faults.
I guess we'll see, with the open-world game inFamous 2 right around the corner. I believe they are shooting for 60 fps at 720p. I think, as far as clock cycles are concerned, the PS3 has just reached its limit with this batch of AAA first-party games.
 
I can't wait to see inFamous 2, as I loved the first, but if Sony has reached the max the PS3 can go to, then I hope they can keep up that consistency across the rest of their first-party titles.
I have never had it so good, with titles like LBP2, Killzone 3 and GT5.
 
I can't wait to see inFamous 2, as I loved the first, but if Sony has reached the max the PS3 can go to, then I hope they can keep up that consistency across the rest of their first-party titles.
I have never had it so good, with titles like LBP2, Killzone 3 and GT5.
That's a different statement. I don't believe the PS3 is maxed out, given the speed benefits still available from optimizing for the Cell architecture. According to devs like GG and ND, they have reached 100% clock cycle usage. Supposedly, that's the hardest part of working with Cell. Wasn't ND the first to use all the available clock cycles on the Cell with U2? I'm thinking PGR4 might have been one of the first on the 360. PGR4 was sweet!
 
I know when you're talking about a console's power it's pretty much impossible to quantify

As you say, almost impossible to quantify. I thought they were all done and cooked, but then Crysis 2 comes along with real-time GI. Who'd have thought that was possible on this hardware? I sure didn't, but there it is. It's stuff like that which extends the life of these ancient machines a little bit more every time, because once that tech gets into the hands of other studios you can get some clever new looks and new game designs. I'd think that's about it, though; I can't see where else they can go. Plus, other details have basically been at a standstill for years and are unlikely to change, like texture filtering, polygon count, etc.


I have it on good authority that the G7x series is really bad with dynamic branching.

Among other things :) Shader branching on PS3 is a no-no. You instead take all your branching cases and make a separate shader for each combination: if your shader had one branch you make two shaders for it, if it had two branches you make four shaders, and so on. Then every frame you determine which branch combination a given mesh falls under, and slap the correct branch-free shader onto it before feeding it to the GPU. It takes more memory and CPU time to do that, but if you instead rely on RSX to handle branching, your frame rate is toast. Note that I've simplified here; naturally you can't just feed it straight to the GPU, as you still need to spend more CPU time fixing the other things RSX is bad at, like shader patching, before you actually submit the mesh, but you get the point.
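A minimal sketch of that permutation scheme, assuming two boolean branch features (plain C++; the feature names and everything else here are mine, purely for illustration):

Code:
#include <cassert>
#include <cstdint>

// Two boolean "branch" features => 2^2 = 4 precompiled, branch-free variants.
enum FeatureBits : uint32_t {
    FEATURE_FOG    = 1u << 0,   // was: if (fogEnabled) inside the shader
    FEATURE_SHADOW = 1u << 1,   // was: if (shadowed)   inside the shader
};

struct Shader;  // opaque handle to one precompiled program

struct ShaderSet {
    Shader* variants[4];        // indexed by the feature bitmask

    Shader* select(bool fog, bool shadow) const {
        const uint32_t key = (fog ? FEATURE_FOG : 0u) | (shadow ? FEATURE_SHADOW : 0u);
        assert(key < 4u);
        return variants[key];   // the one variant with no branches in it
    }
};

// Per mesh, per frame (pseudo-usage):
//   Shader* s = set.select(mesh_in_fog, mesh_receives_shadows);
//   bind(s); draw(mesh);      // RSX never executes a dynamic branch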


Wasn't ND the first to use all the available clock cycles on the Cell with U2? I'm thinking PGR4 might have been one of the first on the 360. PGR4 was sweet!

Strictly speaking, it's unlikely anyone will ever max out the SPUs, even if they show 100% cycle usage, because the SPUs are dual-issue. You can get as high as 0.5 cycles/instruction under the right circumstances, but I doubt any game will ever do that, because of how hard it is to sustain dual issue and the kinds of things games do. You can show 100% SPU cycle usage on a performance chart yet still be at 1.0 cycles/instruction, which is technically half of what the SPUs can do.
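Back-of-the-envelope, assuming the SPU's 3.2 GHz clock, a "100% busy" chart can hide a factor of two:

Code:
#include <cstdio>

int main() {
    const double clock_hz  = 3.2e9;  // SPU clock, 3.2 GHz
    const double cpi_shown = 1.0;    // "100% busy" chart, one instruction/cycle
    const double cpi_peak  = 0.5;    // perfect dual issue, two instructions/cycle

    // Same 100% utilization, twice the retired instructions:
    printf("at 1.0 CPI: %.1f Ginstr/s\n", clock_hz / cpi_shown / 1e9);  // 3.2
    printf("at 0.5 CPI: %.1f Ginstr/s\n", clock_hz / cpi_peak  / 1e9);  // 6.4
    return 0;
}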
 
It actually might be good if the engines stabilize and don't need that much more development. Perhaps we're hitting the golden era where new kinds of games and gameplay mechanics come out.

Who would really want to play yet another CoD or NHL with just better graphics? Bring on some innovation in gameplay and I'm one happy camper. That innovation might be easier if money doesn't need to be spent on ever more graphics assets or yet another funky new engine.
 
No, but I think this year they will finally take full advantage of the PS3 in a few titles.

Yeah, that's right! You heard me, 2011 is the year of the pstriple!
 
I hope so. It's about time the Cell delivered.

Cell has been delivering just fine. It's just held back by the RSX ... :LOL:

I think the question should probably be split into first party and third party. I get the impression that first party is starting to get close to the limit, but third party still has some way to go.

However, I do not doubt that there are still some really amazing things possible from people who make a radical shift in how they approach graphics. The nice thing about software is that it can reinvent itself totally. I still feel, for instance, that very little use is being made of streaming for AI and such, which could lead to much more interesting game worlds and behaviors. I'm worried that it's more of a playtest issue than anything else, though: nobody dares create a game with unpredictable behavior. We're already lucky that people like Media Molecule dared to attempt LBP and LBP2's creation modes.
 
I don't know if 'at their limit' is the proper term. I think there's a lot to be gained from refining efficiency or using hardware in new ways.

This is why we'll be seeing better looking games coming out all the way up until the end of the current console generation. Unless developers just give up trying.

It's all about finding optimizations, efficiencies, shortcuts, etc. to either do things faster or make things look better.

In the first case, trying to do some graphical thing X faster with as little image sacrifice as possible: MLAA, for example, is a good compromise between quality (slightly decreased versus MSAA), speed (greatly increased) and flexibility (easier to apply AA with deferred lighting). In the second, increasing the quality of a graphical effect with as low a speed hit as possible: Crytek's use of GI, for example.
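To give a feel for why post-process AA is so cheap, here is a heavily simplified toy of the basic idea: find luma discontinuities, then blend across them. Real MLAA goes much further, classifying edge shapes (L/Z/U patterns) and computing coverage-based blend weights; everything below is my own illustration, not anyone's actual implementation.

Code:
#include <algorithm>
#include <cmath>
#include <vector>

struct Pixel { float r, g, b; };

static float luma(const Pixel& p) { return 0.299f * p.r + 0.587f * p.g + 0.114f * p.b; }

void toy_post_aa(std::vector<Pixel>& img, int w, int h, float threshold = 0.1f)
{
    const std::vector<Pixel> src = img;  // read from a copy, write in place
    auto at = [&](int x, int y) -> const Pixel& {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return src[y * w + x];
    };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            const float center = luma(at(x, y));
            // edge if luma jumps to the right or below
            const bool edgeH = std::fabs(center - luma(at(x + 1, y))) > threshold;
            const bool edgeV = std::fabs(center - luma(at(x, y + 1))) > threshold;
            if (!edgeH && !edgeV) continue;
            // naive 50/50 blend across the detected edge (real MLAA weights
            // this by the recovered edge coverage instead of a fixed blend)
            const Pixel& n = edgeH ? at(x + 1, y) : at(x, y + 1);
            Pixel& d = img[y * w + x];
            d.r = 0.5f * (d.r + n.r);
            d.g = 0.5f * (d.g + n.g);
            d.b = 0.5f * (d.b + n.b);
        }
}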

At this point devs will have a good grasp of each respective console. And now it's all about finding those all important shortcuts, optimizations, etc.

According to devs like GG and ND, they have reached 100% clock cycle usage. Supposedly, that's the hardest part of working with Cell. Wasn't ND the first to use all the available clock cycles on the Cell with U2? I'm thinking PGR4 might have been one of the first on the 360. PGR4 was sweet!

It really isn't that difficult for any developer on any console to achieve 100% clock cycle usage.

How you use those cycles is far more important than the fact that you used them in the first place. For example, having each core/SPU loop on a complex mathematical equation can hit 100% clock usage but probably doesn't do much for a game. :) Prime95 loads my CPU cores quite nicely but isn't particularly fun to play or watch. :)
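A trivial toy of that distinction; both loops will peg a core at "100% busy" in a profiler, but only the second produces anything a game could use:

Code:
#include <cstdio>

int main() {
    // 100% utilization, zero useful output: a synthetic burn
    volatile unsigned burn = 0;
    for (unsigned i = 0; i < 100000000u; ++i) burn += 1;  // just spends cycles

    // the same utilization doing actual work: a result we needed
    unsigned long long sum = 0;
    for (unsigned long long i = 1; i <= 100000000ull; ++i) sum += i;
    printf("sum = %llu\n", sum);
    return 0;
}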

Regards,
SB
 
You should try SuperPi :)
If the Cell is such a monster, wouldn't it make sense for Sony to use it in the PS4?
I can't help but feel they will go for an off-the-shelf CPU for the PS4.
They appear to have built all of these tools for Cell, so why waste that and start again?
With an updated Cell, a faster Blu-ray drive and a great GPU, the PS4 could be an absolute monster, but then we don't know what MS has up their sleeve with a DirectX 11/12 console.
 
If the Cell is such a monster, wouldn't it make sense for Sony to use it in the PS4?
I can't help but feel they will go for an off-the-shelf CPU for the PS4.
They appear to have built all of these tools for Cell, so why waste that and start again?
With an updated Cell, a faster Blu-ray drive and a great GPU, the PS4 could be an absolute monster, but then we don't know what MS has up their sleeve with a DirectX 11/12 console.

It all depends on whether R&D is continuing on Cell, and it doesn't appear to be. IBM will be incorporating elements of Cell into their Power CPUs. Toshiba isn't doing R&D, and I don't think Sony is either, as they've mentioned they don't plan on shrinking Cell any further. So it's relatively stagnant, meaning any advantages it has will likely be eclipsed by 2014.

Regards,
SB
 
It's all about finding optimizations, efficiencies, shortcuts, etc. to either do things faster or make things look better.
[...]
At this point devs will have a good grasp of each respective console. And now it's all about finding those all important shortcuts, optimizations, etc.
Does this take ten years, though? Developers know on day one* what kind of hardware they're dealing with. I haven't seen many advances in visual fidelity over these past years, and there will be even fewer during the second half of the console cycle.

*actually much earlier than that, because launch games begin their development way ahead of the console launch
 
I think it's a meaningless question.
Developers certainly learn from previous experience with a platform, and second titles are often a jump over first titles, though in practice that's as much a function of tight launch windows and late hardware visibility as anything else.

I mostly disagree with the forum's opinions on technical superiority anyway; pretty graphics are much more a function of art direction and good choices during asset creation than of technical merit.
When I did my first game port, circa 1989, I realized how little good technology or software engineering had to do with pretty games.

Sure, there are new techniques being developed, but these days I think that has less to do with hardware and more to do with algorithmic development.

I think what does improve through a console's life cycle is the tools, and that probably has as much effect on quality as anything else. Ask any AAA product developer how many lines of code are in the tools they use to build a game versus how many are in the actual product.
 
Does this take ten years, though? Developers know on day one* what kind of hardware they're dealing with. I haven't seen many advances in visual fidelity over these past years, and there will be even fewer during the second half of the console cycle.

*actually much earlier than that, because launch games begin their development way ahead of the console launch

Umm, for example, the X360 dev kits were supposedly X800-based, like half the power of the real thing.

Most titles at the X360's launch looked like HD past-gen games, the notable exception being PGR3.

The difference between first-gen X360 software and, say, Uncharted 2, Crysis 2 or Bulletstorm is staggering.

I still remember the various "tiers". The first game to really impress people as "next gen" on 360 came in spring 2006: GRAW, a poor-looking title today. Then people were blown away in fall 2006 by Gears of War, obviously the best-looking console game yet, easily surpassed today. Gears probably really was the first game, imo, to look "next gen"; IIRC Heavenly Sword and Motorstorm hadn't come out on PS3 yet. Killzone 2, imo, was the next title that really pushed things in a big way.

I think the biggest evidence yet that this gen is becoming "tapped" is that Killzone 3 and Uncharted 3 look mostly similar to their groundbreaking predecessors imo, with only relatively minor incremental improvements, especially since the PS3 had been the console really pushing the envelope, with KZ2 and UC2 as its standard-bearers.

I think Crysis 2 is pushing things maybe even a little farther; maybe it can join the likes of Gears 1 and Killzone 2 as big generational graphical touchstones. We'll see when it comes out. But I don't think it's a massive leap over the Killzones, in a way.

But still, compare C2 to what was on the 360 at launch, and I can't even imagine how much my back-then mind would have been boggled, lol. I still remember Gears 1 graphics just blowing my mind. I still remember NeoGAF and myself gawking at Rainbow Six Vegas and the likes of Kane & Lynch, again all average-looking today. The first Assassin's Creed real-time gameplay was similarly amazing; again, it would be nothing today. Hell, Modern Warfare 2 initially traded on graphics appeal. I remember breathlessly posting about it here; remember those first "photorealistic" Game Informer scans? And I remember NeoGAF saying IW must have made a deal with the devil to get it looking so great at 60 fps... now COD titles look pretty rough for the most part to me.

So yeah, to make a long story short, I do see signs of things becoming tapped. I don't see the big leaps anymore, just smaller ones. I look at Crysis 2, where they're pushing everything to 11, and see a lot of limitations, especially in RAM, even as, imo, it's the best-looking current-gen title yet. We're still seeing incremental improvements, but certain RAM-related areas seem like a dead end. Battlefield 3 is another example of a game that looks like it will push things a little bit, incrementally, yet the background buildings still scream low RAM.

It's hard to say what the future holds, though. I remember some football game on the SNES, towards the end, really being like the first game with huge players. You still might see some sort of groundbreaking graphics this gen, but it would likely be in some unforeseen, limited style. If you could limit the scope enough, you might really be able to do some things.
 