Future console discussion and thoughts

I think this is false.

If you ever actually check out screens (or better, video) of Xbox 360 games that the media claimed look just like their Xbox counterparts, you will usually find there is in fact a vast difference. Try it with, say, Madden. It is quite eye-opening.

In other words, they lie.

Look at it this way: if nothing else, do you think it's really possible to engineer similar-looking games on a box with 512 MB of RAM versus 64 MB, even if all else were equal, unless you tried really hard?

The 512 MB box is going to have massively better-quality textures, at the least.


Indeed they did look better, and anyone comparing them side by side would agree, but you also have to admit that the difference isn't big enough for most people to notice initially, or without those details being pointed out.

Point is, the jump wasn't as big and obvious as PS1 to PS2. I expect that by the end of this generation it will be even more difficult to tell the jump to the next.
 
Check my Madden pics. I think your statement is, at best, debatable. Otherwise, why do Wii games already look so dated, and why are they constantly commented on as such?

You could probably make an argument that the jump is bigger this time than ever before. And again, imagine what could be done with the 360/PS3 at 640x480 (i.e. a level playing field), or maybe force the previous gen to do 720p, for more laughability.

I think it is at times more difficult to tell the difference this gen, but I still attribute much of that to the resolution increase. It's not a factor that can be ignored.
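
For what it's worth, here's the back-of-the-envelope version (a sketch; 640x480 and 1280x720 are just the nominal SD and HD render targets, and actual games vary):

Code:
sd = 640 * 480    # 307,200 pixels (nominal SD target)
hd = 1280 * 720   # 921,600 pixels (nominal 720p target)
print(hd / sd)    # 3.0 -- every frame pushes 3x the pixels

So at 640x480, a 360/PS3 GPU would have roughly triple the per-pixel budget to spend on everything else.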
 
There was a resolution increase between PS1 and PS2 as well. I think Gran Turismo is a prime example:
http://www.mondemul.be/screens/roms/psx/Gran Turismo 2-PSX-PAL-CD1.jpg
http://www.theautochannel.com/mania/video-games/images/Gran_Turismo_2_3.jpg

http://www.thunderbolt.be/reviews/ps2/granturismo3_3.jpg
http://www.futuregamez.net/ps2games/gt3/gt3_3.jpg

The jump from PS1 to PS2 was huge because it wasn't just a 4x increase in resolution; it was going from such a low point graphically into a more acceptable range. In this generation I think it will be established even further that PS3/X360 graphics are acceptable for most people, and the platform holders will have to use other advantages to lure new customers.
 
By and large, unless I'm mistaken, both PS1 and PS2 operated at 640x480 (or 640x240), or whatever it "really" is. Point is, they operated on SDTVs.

I am not sure at all what the point of your pics is. Unless you are sure those are framebuffer grabs, they don't tell us anything about the resolution of the game.

Some PS2 games operated at various resolutions. Some rendered at less than 640 wide, then upscaled, in much the same way PGR3 does upscaled 720p on the 360. Also, the PS2 GT had some sort of faked 1080i (it was not true 1080i).

Where do you get a 4x jump in resolution from PS1 to PS2, and what is your source for that? That is just way out of left field. It would mean the PS1 rendered at 160x120 or something.
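
For the record, the arithmetic under the two possible readings of "4x" (a sketch, using 320x240 and 640x480 as the assumed nominal modes):

Code:
ps1 = 320 * 240               # 76,800 pixels (assumed nominal PS1 mode)
ps2 = 640 * 480               # 307,200 pixels (assumed nominal PS2 mode)
print(ps2 / ps1)              # 4.0 -- 4x the *pixels*
print(640 / 320, 480 / 240)   # 2.0 2.0 -- but only 2x per axis
print(640 // 4, 480 // 4)     # 160 120 -- what a 4x *per-axis* jump would imply

So "4x" is only defensible as a pixel count; it is nothing like a 4x jump per axis.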
 
PS34 could have 4 Cells (the DP revision of Cell) and do it all in software. I honestly think that by the time the NEXT generation of consoles is released, the GPU will be all but dead and most things will be software-based, just like the old days.
 
Next gen's bottleneck is the same as this one's: artists. I gotta go quote myself on that one from 2 years ago :smile: Currently I think the 2 "best" looking games are Team Fortress 2 and Trusty Bell: good use of the available technology, used in a way that accents the strengths of the current tools and hardware.

While it is true an 8x increase in textures, polys, etc. may see diminishing returns in regards to what consumers can see or care about, I think graphics is in need of new techniques and approaches, both for ease of development and for "oomph!!" AA and filtering are obvious, if subtle, candidates, but I think the areas in most dire need of per-frame visual improvement, imo, are shadows and lighting. Every "next gen" game I see without self-shadowing makes me cringe, as do many games with flat, boring lighting. But for overall graphical impact I think animation and environmental interaction could go even further. The first eyesore I spot in a game like Madden is animation. Ditto FPSes: when a grenade hits a building and nothing happens... ugh.

I already posted about possible hardware projections, so I won't bore anyone. But I would say Laa-yosh's comment about cost reduction is real, and how Sony/MS respond is anyone's guess. I would not be surprised, though, to see one company go for a design that is GPU-centric and the other a more CPU-centric one. And we will see some areas blur, e.g. the former doing physics primarily on the GPU and the latter on the CPU.

It would be nice to get solid framerates as standard, on top of high levels of AA and filtering. Anyhow, I think next gen will be a lot about middleware (like UE3), services (like XBL), and user interfaces (like EyeToy 2 and the Wiimote).

And who knows, maybe next gen will be further away. Going by the 40% cost reduction on flash memory, 2010-11 seems slightly too close. And while I expect an optical drive in the new consoles, I don't think it will be used quite like DVD and Blu-ray have been. A longer console life cycle undoes some of the issues of diminishing returns, but it does raise issues of the market sagging at the tail end and a frustrating transition time.

And as much as some things are slowing down, seeing GPUs with 500 GFLOPs of performance, approaching and soon exceeding 100 GB/s of bandwidth, with a ton of new features, high-quality filtering, and massive amounts of texturing power does give some hope. Something like G80 outclasses both consoles graphically by leaps and bounds. The GF3 and the Ti4200 straddled the Xbox 1 launch, and neither did that to NV2A: there were not many things I would expect from a Ti4200 that I would not from the Xbox 1, but I think G80 can do things graphically that neither console could realistically pull off to a reasonable degree of quality. 2011, 5 years from now, is a long time for technology. 2001 => GF3/4 (NV2A); 2006 => G80; 2011/12 => ? We may even see a design invest more of its transistor budget into the GPU due to flexibility improvements. Who knows... it will be an exciting 5 years.
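
As a rough sanity check on that window (a sketch: the ~345 GFLOP G80 baseline and the 18-month doubling period are assumptions, not data):

Code:
baseline_gflops = 345.0   # assumed G80 shader peak: 128 ALUs * 1.35 GHz * 2 ops
doubling_months = 18      # assumed growth rate for GPU throughput
years = 5                 # 2006 -> 2011
print(baseline_gflops * 2 ** (years * 12 / doubling_months))   # ~3,470 GFLOPS

If anything close to that holds, a 2011/12 part is an order of magnitude past G80, never mind the current consoles.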
 
There's no such thing as diminishing returns as far as the potential of realtime graphics goes. The idea of diminishing returns gets into people's heads because console GPUs have not made an enormous leap from last gen to this gen, like they did from the PS1/N64 gen to the PS2/Xbox/GCN gen.

Realtime graphics are limited by what the silicon can do and what developers can get out of it, but there are no actual diminishing returns on what is possible with realtime graphics, if the hardware keeps advancing.

We do need a very large leap in image quality for the next gen, not merely more polygons & shaders. Perhaps also something beyond shaders as far as rendering goes, like GI of some kind, even if it's a cheat, so that a realtime version of GI can be done.
 
PS34 could have 4 Cells (the DP revision of Cell) and do it all in software. I honestly think that by the time the NEXT generation of consoles is released, the GPU will be all but dead and most things will be software-based, just like the old days.


Totally disagree. 4 Cells will not be as good as even a low-end GPU as far as graphics rendering goes. The GPU is not going to be dead next generation; first would have to come a new class of processor, a CPU-GPU, and Cell is not there now and probably won't be in 5-6 years, unless Sony does that Cell-based GPU (the Visualizer) that we saw in patents back in 2003. Even then, a dedicated GPU from Nvidia or ATI would rip 4 next-gen Cells to shreds in graphics rendering. Cell isn't designed for graphics rendering, although it CAN assist the GPU in certain areas.
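
To put rough numbers on it (a sketch using the commonly cited peak figures; peak flops say nothing about texturing, filtering, or ROP throughput, which is exactly the point):

Code:
spe_gflops = 4 * 2 * 3.2       # 4-wide FMA at 3.2 GHz = 25.6 GFLOPS per SPE
cell_gflops = 8 * spe_gflops   # ~204.8 GFLOPS per Cell (8 SPEs)
print(4 * cell_gflops)         # ~819 GFLOPS for four Cells
print(48 * 5 * 2 * 0.5)        # Xenos: 48 ALUs * (4+1) lanes * MADD * 0.5 GHz = 240
# Four Cells "win" on paper, but a software renderer must spend those
# flops on triangle setup, texture fetch/filtering, and blending that
# a GPU does in dedicated silicon.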
 
This is why I think the PS4/X720 generation will be more about adding "life" to the game world than visual impact in a single screenshot. By "life" I mean things like concentrating heavily on physics, animation, AI, and subtle detail. These are the things that will give Sony and Microsoft great "bang for the buck" in differentiating their new offerings in 2010.

Personally, after looking at things like Crysis (and especially after thinking about what a late-generation 360/PS3 game should be able to do), I ask myself whether even things like physics, animation, and subtle detail will actually see as big a jump as one might initially think. I mean, in some games (Crysis) almost everything is interactive.

I do believe that next gen there will be a good and noticeable improvement in all the traditional areas, but I think it will be only a step further instead of a jump.

This leads me to think that the big improvements will be in non-traditional areas like interfaces or AI.
 
PS34 could have 4 Cells (the DP revision of Cell) and do it all in software. I honestly think that by the time the NEXT generation of consoles is released, the GPU will be all but dead and most things will be software-based, just like the old days.

There won't be a PS34;)

Soooooo....are you saying PS4 will use CELL processors and a 2D graphics chip? CELL x 4 + 2D = PS4D? :LOL:
 
I think it would be cool if, say, Sony went that route and focused on AI, physics, and new controls, but not graphics (maybe only a slightly clock-bumped RSX, but, say, 4+ Cells for tons of physics) in the PS4. And say Microsoft really went all out on graphical power in the Xbox 720 (10x Xenos), thereby giving gamers the best of both worlds, so to speak, depending on which console they choose, or both.

I really kind of think Sony has been leaning that way anyway, going by certain statements. This would leave Xbox as the sole "graphics" platform in the future, and Sony + Ninty as the "fun + physics" (but not great graphics) systems.

Since, as you say, graphics aren't increasing that much, it shouldn't be a big problem for Sony in that regard. Sure, the Xbox 720 would have more bells and whistles, but would Joe Consumer really be able to tell the Xbox 720 had a 10x more powerful GPU than the PS4? That's the way I think it may go in the future. What I see would be something like:

PS4: 8 to 12 Cells, RSX clocked at 700 MHz (~300M transistors for RSX)

Xbox 720: 3.5+ billion transistor ATI chip at 1.5-2.0+ GHz, Intel quad/8-core Kentsfield at 3 GHz+

This would give Sony a huge cost edge, and I'm not sure laypeople would be able to tell the graphical difference immediately because of diminished returns. But the 8-12 Cells would allow huge gigaflops of physics, AI, etc. Every extra transistor Sony has would be thrown into more Cells, rather than a diminishing-returns GPU rat race, and Cell is some amazing technology, so I would see them going with all their eggs in that basket.

I am saying 3.5 billion transistors on the Xbox 720 chip. It sounds like a ton, but my thinking is: 90 nm started around 300M transistors on GPUs (G71/R580 class) but then doubled to around 600-700M (G80/R600 class) on the same node. So I am figuring two transistor doublings per node (one at the start, then a second at process maturity): say 65 nm starts at 1.2B, then ends at 2.4B. You quickly get to very high figures, even though 2010 may only see the 45 nm node.

That might lead to, say, 600-800 shader ALUs on the Xbox 720 clocked at 1.5 GHz+ (the way GPU clocks are scaling), but even so, would a layman be able to tell between that and RSX? I'm betting Sony thinks not.
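
Following that two-doublings-per-node logic through (a sketch of the projection above, not a roadmap):

Code:
count = 0.3e9   # ~300M transistors, early 90 nm (G71/R580 class)
for step in ("90nm mature (G80/R600 class)", "65nm early",
             "65nm mature", "45nm early"):
    count *= 2
    print(step, count / 1e9, "billion")
# 0.6, 1.2, 2.4, 4.8 billion -- so ~3.5 billion on 45 nm in 2010
# sits comfortably inside the range.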
 
Why would the Xbox 720 use Intel processors? Why not just use a 6-PPE-core CPU? What's to stop ATI from having physics acceleration in a Xenos II?
 
I don't know... that's more my preference. The Xbox 360 book, though, does mention that after IBM had all but won the contract, Bill Gates made a personal last-ditch appeal to see if there was any way they could fit an "old friend" Intel CPU in the box. Of course there wasn't, but you get the point; I think Intel is as likely a choice as any. Of course, the real stumbling block there is IP ownership.

A 6-PPE-core CPU would probably be a bit on the light side for 2010 as well. I mean, I would think you'd be talking at least 12 cores in that case, which would still be well below Moore's law.

Physics acceleration in Xenos II... I dunno, I'm just laying out the theoretical scenario, with Microsoft being the "graphics" player. They could use CPU power for physics. Besides, Xenos II physics acceleration would probably be more like in G80: it's possible, but you don't have to use it, and the whole chip can just as well be dedicated to graphics.
 
There's no such thing as diminishing returns as far as the potential of realtime graphics goes.

Realtime graphics are limited by what the silicon can do and what developers can get out of it, but there are no actual diminishing returns on what is possible with realtime graphics, if the hardware keeps advancing.
Do you understand what 'diminishing returns' means? It means that for a given amount of progress on the silicon side, the visible improvements get proportionally smaller and smaller relative to the previous abilities. 10,000-poly models are a huge improvement over 1,000-poly models. 100,000 polys are less of an improvement, but a noticeable one. 1,000,000 is a smaller improvement still. 10,000,000 won't be noticed; 100,000,000 likewise. Thus for each order-of-magnitude advancement in GPU performance, the returns get less and less... they're diminishing. Similarly: no AA > 2xAA > 4xAA > 8xAA > 16xAA > 32xAA > 64xAA > 128xAA > 256xAA > 512xAA... As you increase the amount of AA, the GPU requirements increase dramatically, while the visible improvements become less and less.

That's what diminishing returns means, and it definitely applies to realtime graphics.
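
The polygon half of that is easy to put numbers on (a sketch, assuming a 720p frame with triangles spread evenly across it):

Code:
pixels = 1280 * 720   # 921,600 pixels in a 720p frame
for polys in (1_000, 10_000, 100_000, 1_000_000, 10_000_000):
    print(f"{polys:>10} polys -> {pixels / polys:9.2f} pixels per triangle")
# By ~1M polys each triangle covers less than a pixel; past that,
# another 10x in geometry is essentially invisible on screen.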
 
All I care about is productivity and cost control in future consoles.

The current gen's development costs are frustrating. The explosive cost inflation is the major downside of today's video game industry. Unless we can find a niche market, i.e. a game counterpart to the art-house cinema line, we will see more creative studios like Clover perish.
 
Won't that side move onto smaller download titles, though? Okami could have been created as a simpler game, perhaps episodic, to gauge demand. You create your unique idea as a small download game and, if it sells well, create episodes to further the story. That'd also be better economically: rather than investing in masses of content for a large game with no idea of the returns, you get returns throughout the game's creation with the release of each episode, and you can pull the plug the moment sales drop. That way, if an 80-hour epic would cost a lot to make but the players would be content with 40 hours, you can stop after perhaps 2 or 3 episodes instead of creating 5 or 6, not developing content beyond what's wanted.
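
A toy model of that logic (a sketch; every figure here is invented purely for illustration):

Code:
big_cost, big_revenue = 20e6, 18e6             # a big game that underperforms
print("monolithic:", big_revenue - big_cost)   # -2,000,000: the whole bet lost

ep_cost = 4e6                                  # per-episode budget
ep_revenue = [6e6, 4.5e6, 3e6]                 # sales sag, plug pulled after ep. 3
print("episodic:", sum(r - ep_cost for r in ep_revenue))   # +1,500,000

The downside risk is capped at one episode's budget rather than the whole game's.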
 
The issues are the FTQ (first-time quality) of the software products, and the lead time. Those are the two key items for episodic content. (We could call it JIT software development.)
 
I don't know... that's more my preference. The Xbox 360 book, though, does mention that after IBM had all but won the contract, Bill Gates made a personal last-ditch appeal to see if there was any way they could fit an "old friend" Intel CPU in the box. Of course there wasn't, but you get the point; I think Intel is as likely a choice as any. Of course, the real stumbling block there is IP ownership.

A 6-PPE-core CPU would probably be a bit on the light side for 2010 as well. I mean, I would think you'd be talking at least 12 cores in that case, which would still be well below Moore's law.

Physics acceleration in Xenos II... I dunno, I'm just laying out the theoretical scenario, with Microsoft being the "graphics" player. They could use CPU power for physics. Besides, Xenos II physics acceleration would probably be more like in G80: it's possible, but you don't have to use it, and the whole chip can just as well be dedicated to graphics.

If anything, MS will use AMD before they use Intel again, considering AMD and ATI are under one roof now. Also, a 6-PPE-core chip needs to be fabbed on 45 nm or 32 nm to be within cost parity with current console chip costs, so 9 PPE cores isn't gonna work, and 12 PPE cores takes it into the realm of La-La Land.
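
Roughly, on die area (a sketch: the ~168 mm^2 figure for the 90 nm, 3-core Xenon and the ideal 0.5x shrink per full node are both assumptions):

Code:
xenon_mm2 = 168.0   # assumed 90 nm Xenon die size (3 PPE cores)
per_core = xenon_mm2 / 3
for node, shrink in (("90nm", 1.0), ("65nm", 0.5), ("45nm", 0.25)):
    print(node, "6 cores:", round(6 * per_core * shrink), "mm^2")
# 336 -> 168 -> 84 mm^2: six cores only get back toward sane die sizes
# a node or two down the line, and twelve cores doubles all of it again.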
 
All I care about is productivity and cost control in future consoles.

The current gen's development costs are frustrating. The explosive cost inflation is the major downside of today's video game industry. Unless we can find a niche market, i.e. a game counterpart to the art-house cinema line, we will see more creative studios like Clover perish.

If they don't push the gfx area that much (i.e. not much more work in creating content than in this gen), I would expect that costs (at least in content creation) would decrease, thanks to: tools and management finally being ready to work at such scale; reuse of old models; and brand-new solutions for such work.

Won't that side move onto smaller download titles, though? Okami could have been created as a simpler game, perhaps episodic, to gauge demand. You create your unique idea as a small download game and, if it sells well, create episodes to further the story. That'd also be better economically: rather than investing in masses of content for a large game with no idea of the returns, you get returns throughout the game's creation with the release of each episode, and you can pull the plug the moment sales drop. That way, if an 80-hour epic would cost a lot to make but the players would be content with 40 hours, you can stop after perhaps 2 or 3 episodes instead of creating 5 or 6, not developing content beyond what's wanted.

Wouldn't that be true for every game? Don't get me wrong, I think I could very well like episodic content (although I don't really like the way it is being done by Valve). But why not sell each of the Halo levels or Oblivion quests as episodic content? That doesn't sound like a good idea, mainly because a game developed as a full game and one developed as episodic content must be made in different ways (just like movies and TV series).

This also has some problems (and some advantages too); for example, a bad level in a game will not hurt the number of copies sold, but it can severely hurt the sales of the next episodes.

Anyway, this isn't a solution for everything.

The issues are the FTQ (first-time quality) of the software products, and the lead time. Those are the two key items for episodic content. (We could call it JIT software development.)

Actually, I usually find (in anything that is sold episodically) that the first "issue" is the worst one.

If anything, MS will use AMD before they use Intel again, considering AMD and ATI are under one roof now.

Personally I think an AMD Fusion would be a good choice for MS, as it would give them a lot of help: it would be all x86/DX, it would easily offer an architecture well known by devs (if based on a PC one), it could mean a better deal ($), high-end tools/engines from the PC could be ported to the console almost effortlessly, and the APUs could give a good boost to performance and ease of use in some areas (although losing flexibility).
 