Predict: The Next Generation Console Tech


Acert93

NOTE: This is all for fun. Enthusiast speculation, nothing more. Try to keep all speculation based in reality, and if someone makes an error or silly assumption just point out why you disagree and move on. If you are already having a heart attack feel free to exit the thread :D

For those who want to cut to the chase. Acert93's general Next Gen Console GPU:

Method #1: Guesstimate based on the relative difference in GPUs over a 5-year period
Transistors: 1,500M
Frequency: 1.3GHz
And nearly 700 shader ALUs and 72 TMUs.

Method #2: Guesstimate based on projected silicon process changes over the next 5-6 years
Transistors: 2,400M-3,000M
Frequency: 1.1GHz

Wild Card: Dual GPU Console; All-Cell Console

Memory: 4GB of system memory

Are these unrealistic in your opinion?
What is your prediction? Why?

My notes to follow...
 
"What?" you say, "Next Gen has just started! We wont be seeing new consoles for nearly 5 years".

My point exactly! The next-gen consoles are now "old news". RSX's base technology was in PCs in 2005, Xenos will have a unified-shader adaptation on the PC come Vista, and there are already new revisions of Cell appearing. So it's time to move on to more interesting technology ;) Unlike the PC world, which has yearly updates, we are left projecting 5-6 years out, which is an eternity in computer technology. This poses a couple of problems: there are design advancements (think Cell, unified shading designs, SM3.0, etc.) and manufacturing process advancements. The former will be the major focus of my post.

I am prepared to get skewered here and to have my lack of fine-grained technical knowledge lambasted by the experts. Good! Just make sure you provide your own prediction ;)

Some assumptions (you can create your own assumptions for your own prediction, don't feel bound by mine):

* There are typically 5 years between console cycles, although stretching to 6 years is not completely unknown. The 360 shipped in 2005 and PS3 in 2006, so we can expect another console from MS in 2010 or 2011 and Sony in 2011/2012.
* Solid information about the consoles hits 6-12 months before the launch. That means real speculation will start in 2009 when we see the first real leaks.
* News of contracts with GPU and CPU makers will begin to appear in late 2007 or early 2008. (We even heard about the first steps toward Cell in 1999.)
* Sony and MS are already thinking about the PS4 and Xbox 3. So why can't we?

What are our general targets and expectations for the next next-generation of hardware? To arrive at reasonable predictions we need to consider how past chips were made and how future chips in the allotted timeframe will be made. Chief considerations:

* Manufacturing technology and method
* Wafer Size
* Transistor Density
* Architecture

I can think of two methods that use the above data points to guesstimate where we will be, in regards to silicon budgets, come next next-generation:

(1) Looking at the relative difference in chips over a 5-year period; or
(2) Looking at the projected silicon process changes over the next 5-6 years

---

(1) Looking at the relative difference in chips over a 5-year period

The easiest comparison is NV's contribution to the console gaming space, simply because (a) the two chips arrived at a 5-year interval and (b) both were PC-related products, minimizing certain deviations.

NV2A
Frequency: 233MHz
Process: 150nm
Transistors: ~60M (?)
Die Size: 144mm^2 (?)
PS: 1.3
VS: 1.1
Textures: 8
Pixels: 4
Z Samples: 4
Textures per pass: 4
Vertex Shaders: 2
General API: DX8.x

4 pixel pipelines with 2 texture units each

RSX (guesstimated from G71 + E3 2005 info + later slides and "other" documents)
Frequency: 550MHz
Process: 90nm
Transistors: ~300M
Die Size: 196mm^2 (?)
PS: 3.0
VS: 3.0
Textures: 24
Pixels: 16
Z Samples: 32
Textures per pass: 16
Vertex Shaders: 8
Fragment Pipe: 2 ALUs per texture pipe, each capable of Vec3 + Scalar or Vec2 + Vec2
General API: DX9c (SM3.0)

In 5 years Nvidia was able to make the following improvements:

* 136% increase in chip frequency (2.36x)
* ~36% increase in die size
* 400% increase in transistors (5x)
* ~3.7x the transistor density (NV2A: ~0.42M tran/mm^2; RSX: ~1.53M tran/mm^2)

Although there are significant differences in architecture, roughly we can state:

* 12x as many pixel shader ALUs (RSX: 24 fragment shaders, each with 2 ALUs plus mini-ALUs; NV2A: 4 pixel pipelines)
* 4x as many vertex shaders (RSX: 8; NV2A: 2)
* 2-4x as many ROPs (RSX: 8 or 16; NV2A: 4)
* 3x as many TMUs (RSX: 1 TMU per fragment shader for a total of 24; NV2A: 2 TMUs per pixel pipeline for a total of 8)

Rough estimates, assuming GPUs continue on a linear path in chip constraints and transistor budgeting for current features and performance:

* 1300MHz (550 * 2.36)
* ~260mm^2 die
* 1,500M transistors
* 672 Shader ALUs ([48 PS + 8 VS] * 12)
* 16-64 ROPs
* 72 TMUs

Of course we can expect some significant design changes. E.g., 16 ROPs at 1.3GHz is over 20Gpixel/s of fillrate, so architecturally 64 would appear overkill. We also may see more consolidation of the various parts of the GPU in the future. On the other hand, shader ALUs and their associated logic will be larger: D3D10 calls for bitwise and integer support, and there is room for better dynamic branching in the hardware. Then again, RSX's shaders are much more robust than NV2A's, and it still has substantially more of them.

(2) Looking at the projected silicon process changes over the next 5-6 years

Another idea is to consider future progress in the chip manufacturing market. 5-7 years ago GPUs were definitely not made on cutting-edge processes, but they have been steadily catching up, both in adopting newer process technology and in consuming more die space.

* Manufacturing technology and method => I expect they will still be silicon based. We may see more novel approaches and the growth of technologies like MRAM and ZRAM for larger, faster caches that use less power and consume less space than current SRAM.

* Wafer Size => 400mm / 450mm wafers should be online in 2010-2012, allowing for 2x+ as many dies per wafer.

* Transistor Density => In general, the market looks to be transitioning as follows (with Intel holding a roughly one-year head start):

90nm 2005
65nm 2007
45nm 2009
32nm 2011
22nm 2013
16nm 2015

* 2011 / 32nm would in theory mean 8x the transistors per mm^2 (yet the equivalent drop from NV2A to RSX did not even see a 4x improvement)

* With a 33% increase in die size, that would be roughly 10.6x more transistors in theory (each full node nominally doubles transistor density)

* Intel is aiming for 45nm in H2 2007

* Assuming a modest bump in die size (not unreasonable, considering RSX is a "small-ish" flagship GPU) and 3 full process drops in 5-6 years, we can expect a next-gen GPU to have:

2,400M-3,000M transistors
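
The arithmetic behind that range, as a sketch. The 8x figure is the theoretical three-shrink doubling; I then discount the theoretical result to 2,400M-3,000M, since (as noted below) RSX shows real chips falling short of theoretical density:

```python
# Method #2 arithmetic: three full node shrinks (90nm -> 65 -> 45 -> 32nm),
# each nominally doubling transistor density, plus a ~33% bigger die.
# RSX's ~300M transistors is the guesstimated starting point.

base_transistors_m = 300      # RSX-class flagship at 90nm
density_x = 2 ** 3            # 3 node shrinks => 8x density, in theory
die_x = 1.33                  # modest die-size bump

projected = base_transistors_m * density_x * die_x
print(f"{projected:.0f}M transistors")  # ~3200M in theory; 2,400M-3,000M after a reality discount
```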


As mentioned above, RSX did not quite follow the theoretical density curve. Over 5 years we saw die size and transistor density improve, but net transistors only went up 5x, which makes an 8x-10x increase questionable. But the NV15-to-G70 progression does seem to follow the general trend of the silicon, with a 10x+ improvement over the same number of process changes (3) and the same amount of time (5 years):

2000 Q2 NV15 25M tran. -- 180nm
2001 Q1 NV20 57M tran. -- 150nm
2002 Q1 NV25 63M tran. -- 150nm
2003 Q2 NV30 125M tran. -- 130nm
2004 Q2 NV40 222M tran. -- 130nm
2005 Q3 NV47 300M tran. -- 110nm

In regards to frequency, PC GPUs have been marching upward at roughly 100MHz/yr, with periodic spikes and lulls depending on architecture. This is a hard area to generalize, but I will guesstimate a conservative 1-1.1GHz. Die size will lean on the Method #1 data, although larger wafers may allow more aggressive die sizes/lower yields (though the uncertainty of future process reductions may keep things conservative).

Rough estimates, assuming GPUs continue on a linear path in chip constraints and transistor budgeting for current features and performance:

* 1000-1100MHz
* ~260mm^2 die
* 2.4-3.0B transistors



* Architecture => I avoid architecture. We know GPUs will be more general purpose. We also know that things won't scale linearly: 64 ROPs at 1GHz would appear totally out of whack. This is hard to estimate. ALUs are fairly small, so as GPUs focus more on raw power and fewer fixed features, ALU counts could scale faster than the linear transistor increase; yet in the short term we will see ALUs become more robust with integer and bitwise support, which means they will be larger. As an interesting side exercise, adding 32 extra ALUs to the R580 array cost only 63M transistors. Of course the framework was already present, so entirely new arrays would cost more in control logic, caches, etc. But adding 512 of these ALUs (which will be much smaller than future ALUs) would be about 1,000M transistors.

In general I think GPUs will become more general purpose, possibly with some consolidation of the ALUs, ROPs, and TMUs. A unified shading architecture seems a no-brainer, if not too "narrow"; a unified graphics architecture seems much more likely.

eDRAM? We saw it in the PS2 and GCN, and the Wii and Xbox 360 both use it as well. eDRAM offers immense bandwidth for a low cost. Some problems I see: framebuffer-only memory may be too limited, and the costs of tiling and AA (a mere 4xMSAA!?) would be huge. If 1280x720 w/ 4xMSAA needs 28.1MB, then 1920x1080 w/ 4xMSAA needs 63.2MB. That would be ~500M transistors for 63MB of eDRAM. Yet do we want to be stuck with 4xMSAA in 5 years? Maybe we could tile and do 16xMSAA, but MSAA still does not work with all rendering techniques. Of course, using our projections it would have substantial bandwidth (nearly 600GB/s). eDRAM would also be too small for textures. And with a move toward global illumination in realtime, you need to be able to access the entire scene at any time for each pixel; a scene may have over 1GB of data, making eDRAM far too small for this. On the positive side, MRAM and ZRAM could reduce die area, and AA may become a shader-side task (Kirk seems to think so; edge-to-point rendering also suggests how this could be done cheaply in shaders instead of by oversampling). eDRAM will be a big question mark. The need for immense bandwidth is clear, but eDRAM is small. My guess is we will see it used for specialized/isolated bandwidth needs to maximize efficiency, remove bottlenecks, and make the specs more "realistic".
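
For reference, the framebuffer sizes above fall out of a simple bytes-per-sample calculation (assuming 4 bytes of colour + 4 bytes of Z/stencil per sample, which is what the 28.1MB figure implies):

```python
# Back-of-envelope eDRAM framebuffer sizes, assuming 8 bytes per sample
# (4 bytes colour + 4 bytes Z/stencil).

def fb_mb(width, height, msaa, bytes_per_sample=8):
    return width * height * msaa * bytes_per_sample / (1024 ** 2)

print(f"1280x720   4xMSAA: {fb_mb(1280, 720, 4):.1f} MB")    # 28.1 MB
print(f"1920x1080  4xMSAA: {fb_mb(1920, 1080, 4):.1f} MB")   # 63.3 MB
print(f"1920x1080 16xMSAA: {fb_mb(1920, 1080, 16):.1f} MB")  # 253.1 MB -- hence the tiling
```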

Of course, one huge question: will we still be rasterizing in 5-6 years?

---

Memory.

PS1 (4MB) => PS2 (36MB) = 9x in 5 years
PS2 (36MB) => PS3 (512MB) = 14x in 6 years
N64 (4MB) => GCN (40MB) = 10x in 5 years
Xbox (64MB) => Xbox 360 (522MB) = 8x in 4 years

As costly as memory is, 4GB of memory (8x the Xbox 360/PS3) seems pretty assured, unless the focus moves towards bandwidth.
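
As a quick sketch, here are those per-generation multipliers and the naive 8x projection (using the totals from the table above):

```python
# Historical per-generation memory multipliers from the table above,
# and the naive 8x projection for next-next gen.

gens = {
    "PS1 -> PS2":       (4, 36),
    "PS2 -> PS3":       (36, 512),
    "N64 -> GCN":       (4, 40),
    "Xbox -> Xbox 360": (64, 522),
}
for label, (old_mb, new_mb) in gens.items():
    print(f"{label}: {new_mb / old_mb:.0f}x")

print(f"Projection: {512 * 8} MB = 4 GB")  # 8x the 360/PS3 pool
```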

On bandwidth: due to the segmentation of memory pools and architectural designs (like eDRAM), it is hard to take a guess. But we do know this: real-time global illumination requires not only fast processing but extremely fast access to memory. Having large, slow memory pool(s) won't suffice. Problem: even though we will see an 8x increase in memory footprint, it will be very difficult to get an 8x increase in system bandwidth. The Xbox had 6.4GB/s of bandwidth; the Xbox 360's UMA has 22.4GB/s (enter: eDRAM...). MS would have needed to go with a 256-bit bus to get 7x the bandwidth. The PS3, which does use an NV chip, is in the 50GB/s range. Will we see 400GB/s on a UMA? The Xbox1 => PS3 jump says yes. Hmmm...
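
A quick sanity check on the bus-width point: peak bandwidth is just bus width times effective data rate. Assuming the 360's GDDR3 runs at 700MHz (1400MT/s effective), a minimal sketch:

```python
# Peak bandwidth = (bus width in bytes) x (effective data rate).
# Assumes the 360's UMA: 128-bit GDDR3 at 700MHz (1400MT/s effective).

def bandwidth_gbs(bus_bits, effective_mts):
    return bus_bits / 8 * effective_mts / 1000

x360_128bit = bandwidth_gbs(128, 1400)          # 22.4 GB/s (the actual 360 UMA)
x360_256bit = bandwidth_gbs(256, 1400)          # 44.8 GB/s with a 256-bit bus
print(x360_128bit, x360_256bit, x360_256bit / 6.4)  # ...which is 7x the Xbox's 6.4 GB/s
```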

--

Wild Cards.

* DUAL GPUs. As mentioned above, GPUs are moving toward being more general purpose and CPUs are going asymmetric, so instead of an extra-large CPU with narrowly focused co-processors (FPUs, PPUs, etc.), why not invest some of the CPU transistor budget into 2 very large GPUs that work in tandem and can also chew through code offloaded from the CPU? They may not be as efficient as dedicated PPUs, FPUs, etc., but as they improve in flexibility they should be serviceable and, more importantly, more flexible. Having 2x the graphics horsepower would be nice, and getting better transistor utilization across the system would be a win. Maybe we could see a multi-core Intel/AMD CPU flanked by 2 GPUs?

* CELLS. Assuming an 8x increase in transistor density, we could be looking at 8 Cells in the 6GHz range. But why not dump the GPU and go with 16 Cells? Instead of ~3.5TFLOPs you could have 7TFLOPs of general processing power. Sounds crazy, but if we see a shift where such a CELL architecture could do ray tracing and global illumination, why not? And if STI's dream of CELLs in everything comes true, it may be financially feasible to include even more. In 5-6 years devs will know CELL well and have a large library of code that works well on it. It would be nice to have a large scratch pad and a better PPE (more cache, OOOe, etc.). The good news: with CELL technology in hand, Sony can get a good preview of what CELL could really do. They will be able to test 16-CELL systems years before the console is due. If it works, they can push for it. If not... hello NV!
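
For what it's worth, those FLOPS figures check out if you assume each SPE (and the PPE's VMX unit) keeps issuing a 4-wide fused multiply-add (8 single-precision flops) per cycle at the higher clock; peak numbers only, of course:

```python
# Peak single-precision throughput for a hypothetical 6GHz Cell.
# Assumes 8 SPEs plus the PPE's VMX unit, each doing 8 SP flops per cycle
# (a 4-wide fused multiply-add), same as today's 3.2GHz part.

def cell_gflops(ghz, units=9, flops_per_cycle=8):
    return units * flops_per_cycle * ghz

one_cell = cell_gflops(6.0)  # 432 GFLOPS peak per Cell
print(f"8 Cells:  {8 * one_cell / 1000:.1f} TFLOPS")   # ~3.5 TFLOPS
print(f"16 Cells: {16 * one_cell / 1000:.1f} TFLOPS")  # ~6.9 TFLOPS, the "7TF" ballpark
```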

* MEMORY-CENTRIC. Tasks like GI require immense memory bandwidth, and sustainable bandwidth is very important. Unfortunately memory has been a fairly weak point in the industry, with processing power growing much faster than memory accessibility. Could we see a shift where a company like Samsung or Rambus is contracted as the central partner for a console? Yes, Rambus was involved with the N64, PS2, and PS3, but I am talking center stage.

* ASYMMETRIC DESIGNS. AMD has opened up its sockets and wants to bring workstation dual-socket motherboards to the consumer. They are also looking at adding co-processors to their designs (like ClearSpeed). We know Intel's roadmap forecasts asymmetric cores, and CELL brings that today. This is not an odd idea (think FPUs), but we may see more dedicated hardware in the future. Instead of trying to do GI with general-purpose hardware, maybe there will be a dedicated GI core?

* Sometimes we can figure out the big changes by analyzing where things fall short today: What are the current design bottlenecks?

- CPU/GPU bandwidth and interaction is neutered
- CPUs tend to be unbalanced
- GPUs are too narrowly/fixed-function focused
- System bandwidth
- The goal, making good games, has become very complicated and expensive; will hardware turn the corner?

Maybe we will see some big changes in these areas.

* Nintendo will offer a $199 console with specs better than the Xbox 360 by 2010.

---

Now have fun making your own next-next-gen console GPU and system projections.
 
Holy hell Acert, I didn't read most of that because it is 5:20am here, but I will def. tomorrow. Just wanted to say thanks for the analysis, looks fuckin long from where I am :D
 
pakotlar said:
Holy hell Acert, I didn't read most of that because it is 5:20am here, but I will def. tomorrow. Just wanted to say thanks for the analysis, looks fuckin long from where I am :D

Kind of short by my standards :LOL:

Actually, these are just some random notes I have been piecing together for over a month. Totally goofy for the most part... but I like to have fun, and since we finally know what is in these new consoles, why not take a stab at the consoles of 2011? Anyhow, read it as a "fun" post. It is nothing more... and have fun with it.

You guys are smarter than me, so I want to see some real guesses by the smarty pantseseses in the class :p Mine was only an attempt to make you smack your head and say, "What a silly idea! Here is how it will be!" Seems bad posts get the most replies these days!
 
I'm a little unsure on the RAM. PC RAM upgrades seem to have really slowed down. 1GB has been acceptable/the standard for like..3 years now it seems. It seems only very recently that 2GB is becoming at all desirable, and it still doesn't make too much difference in gaming.

In order for 4GB to be standard on next-gen consoles, since they're historically low on RAM compared to PCs, you'd need average PCs of the time to sport 8-16GB of RAM (2-4x the consoles) by 2010-11. I don't see that happening.

I don't know, but 4 GIGA of RAM sounds hella expensive for next gen to me..
 
That is a lot of RAM. Content creation for that would be insanely immense! Can you imagine 1,000 4096x4096 textures per level?!
 
Dual GPUs: somebody posted a few months ago about favoring dual GPUs in a console and was totally shot down as a crackpot. I wonder why it's now an acceptable idea such a short time later? I guess that poster was right.
 
sonyps35 said:
I'm a little unsure on the RAM. PC RAM upgrades seem to have really slowed down. 1GB has been acceptable/the standard for like..3 years now it seems. It seems only very recently that 2GB is becoming at all desirable, and it still doesn't make too much difference in gaming.

In order for 4GB to be standard on next-gen consoles, since they're historically low on RAM compared to PCs, you'd need average PCs of the time to sport 8-16GB of RAM (2-4x the consoles) by 2010-11. I don't see that happening.

I could see 2GB of fast memory being the standard next time around, but consider these factors on the PC and why the parallel may have some holes:

* PC CPUs hit a performance wall in 2002-2003. You don't need significant memory footprint or throughput improvements if your overall performance is not increasing and memory is not the bottleneck. In PC games we have not seen significant jumps in physics, AI, or other demanding tasks because of this, and also because of some of the PC's weak points (poor float performance, and a modular nature that means less-than-ideal component communication).

* Windows, which most PCs run, has issues with anything above 2GB.

* PC memory needs to account for the GPU, with flagship GPUs in the 512MB range. A typical gaming PC can have 1.5-2.5GB of memory in total, significantly more than the consoles. A PC with 512MB of total memory cannot even play most new games.

* The PC memory market has been stagnant due to a lack of competition, the Rambus/Intel miscalculations, and DDR manufacturers' price controls.

* More memory is useless if it is too slow. Until recently PCs had stalled at 6.4GB/s of system bandwidth for a number of years, and before that 3.2GB/s or less. 3.2GB/s with 256MB is a better bandwidth:size ratio than 6.4GB/s with 2GB (12.5:1 vs 3.125:1). Basically space has significantly outpaced bandwidth.
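
Spelled out (bandwidth in MB/s divided by footprint in MB):

```python
# Bandwidth-to-footprint ratio for the two PC eras cited above.
print(3200 / 256)   # 12.5  (3.2GB/s era, 256MB)
print(6400 / 2048)  # 3.125 (6.4GB/s era, 2GB) -- footprint grew ~4x faster than bandwidth
```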

I don't know, but 4 GIGA of RAM sounds hella expensive for next gen to me..

Memory makers benefit from process shrinks as well. If transistor density is going to improve 8-fold in the next 5-6 years, then for the same amount of die space they could feasibly fit 8x as much memory. 512MB x 8 = 4GB. This pattern has held true for consoles for a couple of generations.

But you are right, if the tradeoff is poor, we may see less memory that is faster and more spent on other areas of the console. But we also may see technologies like MRAM take hold where we get even more density, less power consumption, and faster chips. If something like that happened we may see a shift toward more memory-focused designs.

Memory is expensive every generation. Adding an extra 256MB to the Xbox 360 is said to have had an estimated cost of $900M. I think we forget how expensive it is. We pay $40-$50 for 512MB of DDR400; the Xbox 360 has 512MB of GDDR3, which is 3.5x faster--in a $299 box.
 
Acert, I know everyone has already told you LOADS of times, but you really have way too much time on your hands. :D

What worries me about the next gen (or the next next gen) is that I fear there could be a return to the retro. Kind of like how the Wii won't have the most amazing audio/video hardware (by far), I fear there could be a surge of gameplay-focused gaming (which should be the case anyway!) instead of Sony and MS's push for bigger and better graphics all the bloody time!

I guess we'll see what happens when the Wii and the PS3 are released. If the Wii does as well as I think it will, I really do believe that FINALLY these manufacturers will start focusing on gameplay instead of how many coloured pixels their machines can push.

That's my take anyway...
 
Gameplay is boring. Give me more colored pixels :oops:

Only half joking. I like gaming for the technology and following the business more than actually PLAYING for a long time now.

I guess that's ok since I'm an older gamer..
 
london-boy said:
If the Wii does as well as I think it will

I think Nintendo has found a niche where they not only co-exist as a 3rd console but also excel. Hopefully you are right that next gen sees a shift toward gaming innovation, interaction, and intuition--but hopefully not at the expense of the hardware. As much as things may change, I think the core will be there. Especially with huge-screen TVs dropping in price, many gamers will be experiencing HD on 40"+ TVs for the first time next gen.

In general I think Sony already has the PS4 plotted. The CPU will be a new CELL. Nintendo will create the Xbox 2.5 with IBM/ATI and do some new doo-dad. Part of me hopes for high quality HMD+Headtracking, but I think enough people hate that idea it won't happen.

Maybe Wii-Wii (Wii 2) will have 2 Wii-motes :cool: Or a body suit...
 
sonyps35 said:
Gameplay is boring. Give me more colored pixels :oops:

Only half joking. I like gaming for the technology and following the business more than actually PLAYING for a long time now.

I guess that's ok since I'm an older gamer..

Aren't you bored?

If someone can tackle the Formula for Fun (FFF(TM)), I'm sure you - and all the other geeks who only play to look at the pretty pixels - will jump at the chance and finally smile while you play a game, instead of analysing every frame.

I think it's all about the difference between how we used to play when we were kids (when playing any kind of game was just SO much fun!) and how we live our lives now that we're older, which I'm sure most of us will agree is not as happy as before, for a number of reasons. Gaming shouldn't be about analysing and criticising; it should be about fun.

In a perfect world we would get both (fun and top technology), but so far it seems you get one or the other. It's almost as if a company only has so many resources and has to choose where they go: either gameplay or technology. Only a very, very small number of exceptions apply, obviously.
 
Gameplay is mostly in the software though, and you can have fun games on high-tech hardware. The limiting factor appears to be publishers and/or developers (mostly the former, I think). A new gaming interface may add new opportunities for fun but isn't a necessity.

The aspect of the Wii which adds more 'fun' is the controller, but I don't see much room for improvement next-next-gen, nor a necessity for improvements per se. The Wii is aiming to bring more fun to a wider audience, attracting traditional non-gamers, but there's an established user base that wants more fun from more complex games too, and only more powerful hardware can provide those improvements. Now, the inclusion of a Wiimote might spur devs to come up with more interesting games in a way that, say, the EyeToy didn't get utilised, but future fun can't be dependent on a new Whikkistick controller. If devs/publishers are incapable of inventing fun games with existing control schemes and need the hardware guys to produce new widgets to change the software landscape, the industry deserves to die IMO.
 
I can somewhat see the 4GB of RAM happening, since an SLI/Crossfire rig (not even top end) can already have a total of 1GB of video memory. I personally think it will go the multi-GPU route initially, then shrink into one chip when that becomes feasible.
 
As an adult you obviously don't find the same things fun as when you were a kid, when more things were new to you. You experimented and discovered new things, and while you might not have been aware of it then, that was what made everything more fun.

Getting a highscore in a game, or achieving a certain task just isn't as much fun to an adult as it is to a child, because you as an adult know it really doesn't matter much, and that there really isn't any reward for achieving in a game.

For a kid, playing the game can itself be a reward and a discovery, while you as an adult expect more for the time you invest, and you recognise the themes and mechanics for what they are: variations on what you've experienced before. A child can do the same thing over and over again and find it fun, because he's still "investing" in his experience bank; an adult is already "spending" his experience reserves and reflecting everything through them.

Entertainment that crosses the adult-kid border has different "layers" for the different "target groups". Many "family movies", like Shrek, have a story and characters that are kid-friendly but at the same time carry "layers" for adults in the form of good jokes and technical wizardry; likewise, videogames can be simple enough for kids and themed "kiddy" while offering a "layer" for adults in the form of some novel gameplay mechanic and/or technical wizardry. In this cross-generation entertainment, the adult "layer" is in almost all cases much shorter-lived than the "child" layer. A kid can watch Shrek countless times without getting bored, while an adult watches it as many times as there are adult "layers": in Shrek's case maybe 2 or 3 times, once for the "jokes" layer, a second time for the "wow, that's tech wizardry!" layer, and a third for the "being a good parent and spending quality time with the kid" layer.

Edit: The office coffee is some good shiit.
 
At least with the Xbox 360 and PS3, a major driver for the spec upgrades was the jump from SD to HD resolutions. In the next gen there won't be another jump in resolution, so I wonder if we'll see an approach from Sony and MS similar to Nintendo's: a console that's only 3-5x more powerful than the previous gen, but more efficient and cost-optimized.
 
McHuj said:
At least with the Xbox 360 and PS3, a major driver for the spec upgrades was the jump from SD to HD resolutions. In the next gen there won't be another jump in resolution.

Of course there will. I fully expect the next next gen to target 1080p as standard. The standard for the 360 and PS3 will be 720p, with some exceptions on the PS3. 1080p has 2.25x the pixels of 720p, which in turn has about 2.25x the pixels of widescreen 480p, so in terms of performance hit it's the same jump each time...

Besides, what they can do is offer much better AA, which will help a lot.
And I'm sure silly-high resolutions will be "supported", mainly PC resolutions, just as a checkbox feature for the few geeks who will plug their X720 or PS4 into super-high-res monitors.
 
Personally I think it will be more like the Wii: a half-generation more performance (e.g. R600 + PPU + Intia + an OOO CPU), made for easy development, etc., especially offloading to software solutions so devs can keep up the innovation without costs getting too high, allowing a lower price on the console. Plus the addition of new controls, multimedia, free online (big work in there) and "mass" data storage.
 
At least with the Xbox 360 and PS3, a major driver for the spec upgrades was the jump from SD to HD resolutions. In the next gen there won't be another jump in resolution.

Huh? No, better graphics were the major driver, the same as for the ten-fold spec increases in every prior generation that didn't see a res jump.

The move to 720p alone soaked up so much power that it dulled a lot of the impact of the spec increase; I believe it is one reason there has been so much criticism of the 360 as "not next gen enough" or "it looks like an Xbox game in higher res". Previous consoles did not have to deal with increasing resolution, so ALL their extra power went into better-looking games, and you didn't hear "it looks just like the previous gen".

However, the "downgrade" in what could be done versus what could be done at 640X480 was worth it, because it was just too hideous to play at 640X480 any longer. The image qaulity jump in moving to 720P was so great that it looks overall better to play a game with lesser graphics at 720P. I know it has been the case to me and my brother for some time that we could no longer stand playing our consoles on SDTV. We both had become so accustomed to the hi res of the PC world (either in playing games or surfing the web) that we just could barely stomach going back to a PS2 or Xbox game on SDTV anymore.

The jump to 1080p, I believe, is much less of a sure thing. You reach a point of diminishing returns. In some cases it's perfectly valid, when you have various resolution options, to turn down the res to get a higher frame rate and better graphics. This is why people play PC games at all sorts of resolutions rather than always setting the highest possible. I don't think that dynamic will go away next gen. The essential tradeoff will remain: better graphics at lower res, or worse graphics at higher res. That won't go away, but it will be much less compelling to move to 1080p than it was to move to 720p, because unlike 640x480, 720p is not a horrible experience.

Plus, in five years I do not see 1080p settling in as the standard. It's going to be hard enough getting 720p settled in within five years (considering a minority of homes have HDTVs at all).
 