One dev's view: X2 a disappointment

Here is an answer from the Teamxbox.com forum, another take on Xbox 2 tech from another user:

"Originally Posted by EPe9686518
No - it will actually look better, as they still have not been able to turn on some of the effects because they haven't had a machine powerful enough to run them.

Anyway, this guy is clearly biased where Xbox 2 is concerned and really has no clue what he is talking about. The fact that he stated that "no developers are shooting for 60 fps" on a system that has basically 3 CPUs and can run 6 processing threads at once is simply insane. They can dedicate one whole CPU core to helping the GPU with geometry and still have 2 full CPU cores left for physics, AI and frame rate.

As some of you know I work for a gaming web site; I talk with developers a good bit and have listened to what they had to say about Xbox 2. Every single one I have spoken with has been nothing less than extremely impressed and amazed by the system. From the fully developed tool sets to the architecture itself, developers have been extremely happy with what MS has put together for Xbox 2. The guy who made that post is about to shoot himself in the foot big time, as he is dead wrong on many things, and he will completely lose the small amount of credibility he had to start with.

When you guys see Xbox 2 at E3 this year you will be blown away. The jump in graphics is not going to be like the one from PSX to PS2; it's going to be closer to the jump from software rendering to Voodoo 2. I am not being the least bit biased when I say people will literally not believe what they are seeing in terms of graphics and gameplay. The games look so good that there would be a ton of posts on the forums, once you finally get to see them, debating whether they are real-time or pre-rendered CG movies. I say "would be" because you will see them being played in real time, and that will end any debate about it.

We should see PS3 first before we see Xbox 2. But expect to be blown away when you finally get to see Xbox 2, as they have done a great job on the hardware and have some damn impressive games on tap."

So what is the truth?

http://forum.teamxbox.com/showthread.php?t=328909&page=7&pp=15
 
Lazy8 said:
I don't know; it seems like a pretty tight fit. If you want a nicely textured game with rendering precision and good display output, each goal puts a lot of pressure on the other, as happens in PS2 Soul Calibur 2 and other titles. In 32-bit color mode the game displays only interlaced, and it drops to 16-bit color when the front buffer has to be full height for progressive scan.
Front buffer doesn't affect rendering precision though - in pro scan SC2 just runs like pretty much every DC game did.

Of course, the MIP-mapping isn't a problem of the eDRAM size.
Which is rather the point - mipmapping tends to be where 99% of the complaints today about PS2 "IQ" come from.

And ironically, the early titles that had interlacing problems "because of eDRAM limitations" were also using maybe 1/10th (if that) of the texture budgets we have in some titles today - and those newer titles run in progressive scan to boot, sometimes even with FSAA.

nAo said:
but I do know it's quite common and easy to upload 2-3 MB of compressed textures (4- and 8-bit CLUT) per frame into a small (500-700 KB, double-buffered) memory pool in the eDRAM.
Which coincidentally is also quite enough to get nice texturing using nothing but transient textures - if you're really pressed for eDRAM space.
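
Quick arithmetic on those numbers, purely for illustration; the midpoints and the 60 fps target below are assumptions, not anything nAo stated:

```
#include <cstdio>

// Rough arithmetic only; midpoints and 60 fps are assumed for illustration.
int main() {
    const double upload_per_frame_kb = 2.5 * 1024.0;  // middle of "2-3 MB" per frame
    const double pool_kb             = 600.0;         // middle of "500-700 KB", double buffered
    const double half_pool_kb        = pool_kb / 2.0; // one half fills while the other is in use
    const double fps                 = 60.0;

    std::printf("pool swaps per frame: ~%.0f\n", upload_per_frame_kb / half_pool_kb);      // ~9
    std::printf("texture upload rate:  ~%.0f MB/s\n", upload_per_frame_kb * fps / 1024.0); // ~150
    return 0;
}
```

So roughly eight or nine pool swaps per frame and on the order of 150 MB/s of texture upload - a fairly modest slice of the bus bandwidth available for uploads.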
 
As it looks increasingly safe to say that both PS3 and Xbox 2 will have around 25 GB/s of shared (CPU/GPU) memory bandwidth, I have to ask how much of a limitation, if at all, this is. I remember someone saying that PC video cards already have around that much memory bandwidth just for the GPU, so despite coming to market later, the new consoles will not even match that.

This time around there's not even the low-res excuse available, as most reports indicate that devs will make at least 720p games the standard.

Is this especially a problem for the Cell CPU, given its high FLOPS rating?
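
For a rough sense of scale, here is a back-of-the-envelope on what a 720p/60 framebuffer alone might cost against a 25 GB/s shared bus; the overdraw and per-pixel byte counts are assumptions for illustration only:

```
#include <cstdio>

// Back-of-the-envelope only; overdraw and per-pixel byte counts are assumed.
int main() {
    const double pixels       = 1280.0 * 720.0;  // 720p
    const double fps          = 60.0;
    const double overdraw     = 3.0;             // assumed average overdraw
    const double bytes_per_px = 4.0 + 4.0 + 4.0; // color write + Z read + Z write, no FSAA
    const double bus_gb_s     = 25.0;            // the shared figure in question

    const double fb_gb_s = pixels * fps * overdraw * bytes_per_px / 1e9;
    std::printf("framebuffer traffic: ~%.1f GB/s of a %.0f GB/s bus\n", fb_gb_s, bus_gb_s);
    return 0;
}
```

By this crude estimate the framebuffer alone is only around 2 GB/s; the interesting question is how much the texture, geometry and CPU traffic sharing the same bus add on top of that.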
 
Fafalada:
Front buffer doesn't affect rendering precision though
Not directly, of course, but it apparently forced the compromise when too much space was required.
in pro scan SC2 just runs like pretty much every DC game did.
Not really, because PS2 loses its IMR-remedy for precision blending in that mode while a 16-bit DC buffer never truncates/dithers beyond that.
Which is rather the point - mipmapping tends to be where 99% of the complaints today about PS2 "IQ" come from.
Many of the complaints, for sure, but banding and/or graininess can also be bothersome given the amount of multi-pass alpha blending it can do. Not to mention the lack of full-height rendering in even some newer titles like DMC3, and the loss of good proscan potential for videophiles.
 
marconelly! said:
As it looks increasingly safe to say that both PS3 and Xbox 2 will have around 25 GB/s of shared (CPU/GPU) memory bandwidth, I have to ask how much of a limitation, if at all, this is. I remember someone saying that PC video cards already have around that much memory bandwidth just for the GPU, so despite coming to market later, the new consoles will not even match that.

This time around there's not even the low-res excuse available, as most reports indicate that devs will make at least 720p games the standard.

Is this especially a problem for the Cell CPU, given its high FLOPS rating?


OK, but between PS2 and PS3 there lie a whole six years; if their first games only run at 30 frames/s, that would be quite weak. Why didn't they take their cue from arcade machines, where the graphics get a separate graphics bus and memory - why always UMA? The "costs" argument is not an argument for me, in view of the billions in profit Sony has made with the PS1 and PS2, and the big financial potential of MS.
 
darkblu said:
btw, aaron, regardless of how cheap it is, i don't think a copy-on-write will do for a double-buffering scheme - your back buffer is not complete until the very last write to it.

Actually, a proper copy-on-write scheme would utilize a single bit per pixel as a color/marker. If the current marker is not equal to the marker stored, you copy the stored pixel to the frame buffer before writing it. The current marker changes on any frame/buffer switch. Cheap, easy, simple.
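
In toy software form, the scheme described above would look roughly like this (a sketch only - a marker byte stands in for the per-pixel bit, and the buffers are plain arrays rather than real eDRAM/main memory):

```
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy model of the per-pixel copy-on-write scheme described above.
class CopyOnWriteBuffer {
public:
    CopyOnWriteBuffer(int w, int h)
        : width_(w),
          edram_(static_cast<size_t>(w) * h, 0),
          marker_(static_cast<size_t>(w) * h, 0),
          front_(static_cast<size_t>(w) * h, 0),
          current_marker_(0) {}

    // Write a pixel into the on-chip buffer. If the stored pixel still
    // belongs to the previous frame (marker mismatch), copy it out to the
    // front buffer in main memory before overwriting it.
    void WritePixel(int x, int y, uint32_t color) {
        const size_t i = static_cast<size_t>(y) * width_ + x;
        if (marker_[i] != current_marker_) {
            front_[i] = edram_[i];          // the copy-on-write step
            marker_[i] = current_marker_;
        }
        edram_[i] = color;
    }

    // On a frame/buffer switch only the marker sense flips; no bulk copy.
    void SwapBuffers() { current_marker_ ^= 1; }

    const std::vector<uint32_t>& FrontBuffer() const { return front_; }

private:
    int width_;
    std::vector<uint32_t> edram_;   // on-chip working buffer
    std::vector<uint8_t>  marker_;  // conceptually 1 bit per pixel
    std::vector<uint32_t> front_;   // displayed buffer in main memory
    uint8_t current_marker_;
};
```

Note that a pixel untouched during a frame never gets pushed out to the front buffer, which is essentially the coherency objection raised further down the thread.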

Aaron Spink
speaking for myself inc.
 
Lazy8 said:
Not really, because PS2 loses its IMR-remedy for precision blending in that mode while a 16-bit DC buffer never truncates/dithers beyond that.
That was my point - SC2 rendering happens in the backbuffer, which still uses full color precision, just like in DC's case - the results are only dithered into 16-bit after rendering has been completed.

Many of the complaints, for sure, but banding and/or graininess can also be bothersome given the amount of multi-pass alpha blending it can do.
Yes, but that's an issue just for the rare few games that run 16-bit backbuffers. For the reason you mentioned (fast multipass), using a 16-bit backbuffer just doesn't make much sense for a PS2 title; you lose too much in terms of what you can do rendering-wise, not just color precision.
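
For reference, "dithering into 16-bit after rendering has been completed" can be pictured as something like the following - a generic ordered-dither down-conversion, not the actual GS or DC hardware path:

```
#include <algorithm>
#include <cstdint>

// Generic ordered-dither down-conversion from 8-bit channels to RGB565,
// applied only after rendering at full precision is finished.
static const int kBayer4[4][4] = {
    { 0,  8,  2, 10},
    {12,  4, 14,  6},
    { 3, 11,  1,  9},
    {15,  7, 13,  5},
};

// Quantize one 8-bit channel down to 'bits', nudged by a per-pixel offset.
static uint8_t DitherChannel(uint8_t v, int bits, int x, int y) {
    const int   levels = (1 << bits) - 1;                         // 31 or 63
    const float step   = 255.0f / levels;                         // one output step
    const float offset = (kBayer4[y & 3][x & 3] / 16.0f - 0.5f) * step;
    const float d      = std::clamp(v + offset, 0.0f, 255.0f);
    return static_cast<uint8_t>(d / step + 0.5f);                 // 0..levels
}

// Pack one full-precision pixel into a 16-bit RGB565 value.
uint16_t PackRGB565Dithered(uint8_t r, uint8_t g, uint8_t b, int x, int y) {
    const uint16_t r5 = DitherChannel(r, 5, x, y);
    const uint16_t g6 = DitherChannel(g, 6, x, y);
    const uint16_t b5 = DitherChannel(b, 5, x, y);
    return static_cast<uint16_t>((r5 << 11) | (g6 << 5) | b5);
}
```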
 
aaronspink said:
Actually, a proper copy-on-write scheme would utilize a single bit per pixel as a color/marker. If the current marker is not equal to the marker stored, you copy the stored pixel to the frame buffer before writing it. The current marker changes on any frame/buffer switch. Cheap, easy, simple.
One bit won't cut it - that way render-to-texture operations would be all but impossible to make work (and those also need a write back to main memory). Anyway, why do we even need extra hw? Just blit the buffers you're finished with to main memory; it works for GameCube :p
 
I'm posting to back up cpiasminc as well, similarly to Jaws, who I also converse with from time to time on the psinext forum.

First of all I find it amazing that his posts could garner so much attention to begin with - there are some interesting points made, and certainly they are much appreciated on a console-centric forum like psinext, but c'mon people - he freely states when he is making a guess at something. I will add that nine times out of ten I have seen his hypotheses turn out to be correct.

As for the Cell prototype in 2004, two things.

One, he was by and large correct in his claims. Two, if anyone saw the threads play out, at least on the other forums to which I belong, he stated himself that the prototype he witnessed was probably not the same chip going into PS3. Now it seems as if it might be, but regardless, he has never shown himself to be someone who would make anything up or outright lie about something like being invited to see the Cell. And you CAN get a sense for people through a medium like the Internet, as I'm sure most of you know.

I'm just writing this to separate cpiasminc's reputation on the Internet from the similar firestorms that erupt around things like 'leaked' (fake) NV50 and R520 specs.
 
aaronspink said:
darkblu said:
btw, aaron, regardless of how cheap it is, i don't think a copy-on-write will do for a double-buffering scheme - your back buffer is not complete until the very last write to it.

Actually, a proper copy-on-write scheme would utilize a single bit per pixel as a color/marker. If the current marker is not equal to the marker stored, you copy the stored pixel to the frame buffer before writing it. The current marker changes on any frame/buffer switch. Cheap, easy, simple.

while that would save erroneous updates on overwrites, i still don't see how it will solve the frame coherency issue with a copy-on-write & double buffering - how do you guarantee that the pixels in the front buffer all belong to frame N for the duration of said frame?
 
- the comments that creating cross-platform games "will likely be damn-near impossible" for the next gen are interesting... IGN made similar comments.

That developer and the IGN article you quoted are clearly talking about very different things. The developer was speaking of coding differences - PS3 possibly being very difficult to program for.
 
darkblu said:
how do you guarantee that the pixels in the front buffer all belong to frame N for the duration of said frame?
Well the way I read it he wants to flip the marker bit every frame, so double buffering would work, but multiple render targets used as textures obviously wouldn't (at least not without taking major care in how you handled them, certainly not what I'd call simple OR easy).
 
Fafalada said:
darkblu said:
how do you guarantee that the pixels in the front buffer all belong to frame N for the duration of said frame?
Well the way I read it he wants to flip the marker bit every frame, so double buffering would work, but multiple render targets used as textures obviously wouldn't (at least not without taking major care in how you handled them, certainly not what I'd call simple OR easy).

i must be in some dumb state today as i still don't see how aaron's copy-on-write scheme guarantees that, at every single moment during frame N, the front buffer contains only pixels of that frame. the only way i see that happening under this scheme is to start with a global back-buffer update (e.g. a full back-buffer clear of some sort), which, aside from being counter-productive, does not gain you anything compared to a global blit at the start of the frame, IMHO.
 
DemoCoder said:
Well, MS could offer frame-capture or a scenegraph API and do the workload split underneath, but those have failed in the past for various reasons. Remember 3dfx TBR drivers? Remember retained-mode DX?
If you know you have exactly one hardware target and you can change the API as you see fit, I'll bet MS could get around it. 3dfx's TBR drivers were problematic because they were trying to implement TBR when the developers and the APIs were expecting an immediate-mode Z buffer (that's at least part of the issue). If the developers and the API had been expecting TBR, I bet the drivers would have been fine.

Again, I think you underestimate MS's ability and dedication to developers. It would be a black mark on their record if they're trying to sell XNA but don't do a thing about this issue.
 
Well yeah, I made the assumption you fill the buffer first in some fashion (I guess it shows what platform I've been working on for too long), otherwise I figure you'd be seeing mighty interesting things on the front buffer :LOL:
I guess that's just more reason this wouldn't be a good idea.

Anyway, the only thing I worry about with blitting is whether it stalls the GPU when it happens - it's not the biggest deal for one back-to-front flip, but if you want a lot of render-to-texture it could get expensive very fast.
 
Couldn't a modification of aaronspink's idea work for buffer overflows? First assume that compressed 8x8 pixel tiles are being stored in eDRAM, and that you have enough space for most scenes at 720p with 4xFSAA, HDR color, and 24/8 Z/stencil buffers. You also have an SRAM cache which keeps track of whether a tile is stored in eDRAM or main memory, with all tiles starting off in eDRAM. Such a cache (1 bit per tile) would be 8 KB for a 2048x2048 render target. Each time an 8x8 pixel tile is processed, it is fetched from eDRAM or main memory, decompressed, updated with newly computed color/Z results, compressed again, and finally written back to eDRAM (if it fits within some memory budget for an eDRAM tile) or main memory (if it exceeds the budget). IMO you would probably want to arrange things so that Z/stencil info never leaves eDRAM, so a slightly more complicated decision-making process might be necessary.

Anyway, the effect would be that only tiles significantly worse than the average case (in terms of compressibility) would be fetched and stored to main memory - most of the buffer would stay in eDRAM all the time.
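
As a toy model, the bookkeeping for that idea might look something like this (compression itself is elided, the eDRAM pool is not modelled as a fixed-size region, and all names are made up for illustration):

```
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Toy bookkeeping for the tile-overflow idea above. A tile is just a blob
// of bytes, and "eDRAM" / "main memory" are plain vectors, not real pools.
struct CompressedTile {
    std::vector<uint8_t> bytes;
};

class TileStore {
public:
    TileStore(int tile_count, size_t edram_budget_per_tile)
        : budget_(edram_budget_per_tile),
          in_edram_(tile_count, true),       // the 1-bit-per-tile location table
          edram_(tile_count),
          main_mem_(tile_count) {}

    // Fetch a tile from wherever its location bit says it currently lives.
    const CompressedTile& Fetch(int t) const {
        return in_edram_[t] ? edram_[t] : main_mem_[t];
    }

    // Store an updated, recompressed tile: keep it on-chip only if it fits
    // the per-tile budget, otherwise spill it to main memory.
    void Store(int t, CompressedTile tile) {
        if (tile.bytes.size() <= budget_) {
            edram_[t] = std::move(tile);
            in_edram_[t] = true;
        } else {
            main_mem_[t] = std::move(tile);
            in_edram_[t] = false;
        }
    }

private:
    size_t budget_;
    std::vector<bool> in_edram_;            // ~8 KB for the 64K tiles of a 2048x2048 target
    std::vector<CompressedTile> edram_;     // on-chip pool (unbounded in this toy)
    std::vector<CompressedTile> main_mem_;  // spill space
};
```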
 
This thread is FUD-filled... I get the sense that cpiasminc just posted hearsay... I am no whiz at graphics, but I remember J Allard talking up procedural graphics techniques like procedural geometry generation and procedural texturing for room/world creation on Xenon.

http://arstechnica.com/news.ars/post/20050106-4506.html?6851

Maybe cpiasminc is correct, but Xenon may really rely heavily on procedural techniques to generate graphics instead of using SPE-style power to produce geometry.

I think Xenon was designed to rely on those techniques in order to compete with the Multi-SPE power of PS3 by virtue of this patent:

link so it doesn't mess up the spacing of the page

I think the eDRAM is there just for this use.
 
I will cut to my conclusion now: after going paragraph by paragraph through this post, I see (1) nothing new information-wise and (2) a lot of opinion that is very subjective.

There is nothing new here. There is no new technical data and his "statements" are about as subjective as the speculation we have here on a daily basis. I am not impressed... with that said... my breakdown:

Almost everything said here dovetails with the leak. So the technical discussion is nothing new; the only "new" stuff is his interpretation of the architecture. I am not overly impressed because people made similar statements about CELL based on patents and such. And a lot of people were predicting CELL to hit certain milestones and were dead on--yet were also just as wrong in many other areas (e.g. some were right on 1PE:8SPE @ 4GHz = 256GFLOPs, yet were dead set it would have other features like eDRAM) so you have to take some things with a grain of salt. Just because someone makes a claim and it comes true does not mean they are "in the know", especially when they agree with previously "leaked" info.
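
(For what it's worth, the 256 GFLOPs figure is simple arithmetic if you assume each SPE can retire a 4-wide single-precision multiply-add every cycle - that per-cycle throughput being the assumption, not a confirmed spec:)

```
#include <cstdio>

// Where "1PE:8SPE @ 4GHz = 256 GFLOPs" comes from, assuming one 4-wide
// single-precision multiply-add (8 flops) per SPE per cycle.
int main() {
    const double clock_ghz       = 4.0;
    const int    spes            = 8;
    const int    flops_per_cycle = 8;
    std::printf("%.0f GFLOPs\n", clock_ghz * spes * flops_per_cycle);  // 256
    return 0;
}
```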

The main value of his post is his interpretation of the architecture. So here are some questions:

The fact that they chose to centralize their FSB or share a single L2 cache among 3 processors shows some real lack of insight. The biggest flub would have to be that 10 MB eDRAM on the GPU -- which I'm told is really MS's idea (both MS and ATI told me that much) -- that just says they didn't even think about resolution.

1. The L2 cache is in the leak/patents, I believe. Is the shared L2 cache bad? Some here have already disagreed. Obviously he has a right to an opinion, but is this an opinion or a fact? Another relevant question: this is a gaming console with a VERY rigid design structure, not a PC. Is the weakness of a shared L2 cache downplayed or exaggerated in such an environment?

IBM has a shared cache on some of their dual-core chips, as will Intel's new chips (do not shoot me, I got that off a Google search), while it looks like AMD is going with independent caches. I have read blurbs saying either way is better. BUT, is it reasonable for a console to go the AMD route? Intel and IBM think a shared cache is OK at this point--that says something. Also, desktop chips are pretty expensive, so I am wondering if this is a case of "in an ideal world we get feature X", whereas reality tells us a $300 console cannot have every 'key' technical feature on the horizon. Compromises have to be made somewhere along the road... the question is how significant this compromise is. The fact that IBM/Intel chips (will) have shared caches, and that others here have said they do not see this as a problem, makes me think this is an opinion.

My take: Old technical facts; His opinion.

2. The 10MB of eDRAM is in the leak. From the discussions here it seems very clear that 10.5MB of eDRAM would be enough for 720p, and that there are very feasible workarounds if this is a limitation (and it may not be one if the chip is FP32, as Dave indicated may be a possibility). Again, a perfect world says we have unlimited die space and transistor count, and heat/speed/yields are not an issue. Real life tells us compromises set in at some point for a $300 console.

My take: Old technical facts; His opinion.

Hardware-wise Xbox2 is getting disappointing the more I look at it... and I know I shouldn't really be saying that since I'm actually developing on it.

The "tone" message of his post. He is disappointed, for whatever reason, and most of his points relate to this.

As for his comment, people here are said to be working on X2 and VERY excited. So who do you believe? Both can be true--but both are opinions. Based on the pretty open discussion here (and the accuracy of a certain number of people here), I have to put more weight on what the people who are actually working on the system say.

My take: His opinion.

let's just say it's Moore's Law looking perfectly normal and healthy.
Pure silliness. Since when have $300 consoles broken Moore's Law? Even past consoles "wowed" at their demos, but a year LATER, when they actually released, they were well within the realm of what is standard (and this happens all the time... Intel will demo a chip on 'X' process and a year later release a mainstream product).

Three other points: (1) X2 will probably be using the 90nm process. With a 2005 launch there is no other option--that is as good as it gets. (2) Consoles are fixed designs. The tens of millions of X2s that will be shipped will all be the same, so every piece of software can exploit the strengths of the system, unlike on a PC or Mac. (3) The system is defying Moore's Law in that it has 3 CPUs. Well, maybe not defying it, but if you look at the state of CPU progress over the last 2-3 years, you will see that putting 3 CPUs into a system sidesteps the problems chip makers are having with heat, yields and the ability to shrink. We should have 8GHz P4s right now... the fact that X2 will have multiple CPUs shows some foresight (both regarding design limitations AND the fact that it will help push multithreaded games onto the PC).

My take: His opinion, and quite a silly opinion at that.

But if you think of the difference between PS1 and PS2, you should see about the same growth from Xbox to Xbox2, but at the same time, taking into account the difference in resolution, content, shader complexity and everything else put together.

I disagree. The PS was one of the first 3D consoles. Looking back, the PS was BRUTAL: fixed-function and very limited features. Definitely a first stab at basic 3D. The PS and N64 made some basic mistakes (e.g. the N64 had a 4K texture cache!), but how were they to know how those limitations would play out? The PS2 got 5 years of SEEING how the market and technology would develop, and of getting feedback on the weaknesses of the current design. This was also coupled with the breakneck pace of process shrinks.

As we all know, it is getting harder and harder to shrink chips (there is conjecture that in the 2010/11 timeframe consoles will ship on a 45nm process, which would only increase the transistor count about 4-fold). And the fact is chip makers have a better handle on what works, and what does not, in 3D rendering. There are fewer and fewer weak spots and design mistakes, so the change will be more evolutionary than revolutionary.

And he makes an important comment at the end about resolution, content, shader complexity, etc...

1. PS1/PS2 ran at the same resolution. PS3 will support HDTV. That alone requires more power.
2. We expect much bigger worlds, with a plethora of objects, with more dynamic/interactive content in the next gen. That requires more power and memory.
3. PS1/PS2 had fairly fixed-function features. To get the great rendering effects we expect today, shaders come into play--but the better-looking the game, the more shader work required. This is a feature that actually will help cover up the lack of raw progress when comparing PS2=>PS3 to PS1=>PS2: designers get to choose how to tailor the hardware to their game design more than in the past.

So, adding in the limitations the market has, and then the change in resolution format and the demand for much larger interactive/realistic worlds, we can see that we are not asking just for new car/shooter/fighting games with prettier graphics; we are asking for a new level of realism that was not required between the first and second generations of 3D consoles. So the comparisons are only skin-deep in many ways.

My take: His opinion.

The thing is that SIMD is very important to getting any major performance out of PPC processors these days. Without it, they're basically Celerons. So avoiding pipeline stalls and concerning yourself with *instruction latency* is going to be huge on all 3 consoles this upcoming generation. In some ways, that actually means we've gone back to the '60s in terms of programming. It's just that it's the '60s with 3-million-line codebases.

As he notes, this is not a problem only the X2 faces. So his "disappointment" in this area is not an X2-only thing, but a general one with all three consoles. Which raises the question: WHY is he so disappointed by this? There really is no other reasonable option at this point. These are $300 boxes that have to cut corners.

My take: Welcome to console reality for the next 5-6 years. Try to avoid the hard bump on your way in.
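
For anyone wondering what "concerning yourself with instruction latency" looks like in practice, here is a trivial, hardware-agnostic illustration - splitting a dot product across independent accumulators so an in-order core with long-latency FP pipes always has something to issue (a generic example, not code for any of the three consoles):

```
#include <cstddef>

// Hardware-agnostic sketch of hiding instruction latency: four independent
// accumulators break the single add-dependency chain, so an in-order core
// with long-latency FP pipes can keep issuing instead of stalling.
float DotProduct(const float* a, const float* b, std::size_t n) {
    float s0 = 0.f, s1 = 0.f, s2 = 0.f, s3 = 0.f;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i + 0] * b[i + 0];   // each chain only depends on itself
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    for (; i < n; ++i) s0 += a[i] * b[i];   // leftover elements
    return (s0 + s1) + (s2 + s3);
}
```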

I should also note that based on what I'm hearing from every studio I've been to, I'd have to say that, at least for the first generation of games on next-gen consoles, you will not see anything running at 60 fps. There is not one studio I've talked to who hasn't said they're shooting only for 30 fps. Some have even said that for next-gen, they won't shoot higher than 30 fps ever again.

Please note: he did not say X2, he said NEXT GEN.

So I must ask: is there some technical limitation preventing games from hitting 60fps?! The only thing I can think of is 1080i/p limitations. But let's get real: for the first couple of years most people will be running these games at 480i/p, while the games WILL support 720p and maybe more. So if there are GPU pixel-rendering limitations and 720p runs @ 30fps, 480i/p users will get 60fps. Second, I think most of us expect the first couple of software generations to be mostly rehashes of PC or current console games. These most likely WON'T tank the CPUs.

I fail to see how what he says is true in general. He does not state WHO, or HOW MANY, devs he talked to, so the point is moot. I hope someone remembers this statement when we see the first PS3/X2/Rev software. If a healthy percentage (let's say 35%) are not running at a solid 60fps @ 480i/p, I will be shocked. What is the point of better graphics if they are choppy? There will always be developers who overestimate what they can get out of a system (or get development time cut short, or just aim more for pretty still shots than for smooth framerates) and ship choppy games--that will always be the case. But I can hardly believe, based on what we know about the X2 and PS3, that first-gen next-gen console games will not be running at 60fps.
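
The pixel-throughput arithmetic behind that point, for reference (display resolutions only; fill rate and overdraw are ignored):

```
#include <cstdio>

// Display-resolution pixel throughput only; fill rate and overdraw ignored.
int main() {
    const double px_720p = 1280.0 * 720.0;  // 921,600 pixels
    const double px_480p =  720.0 * 480.0;  // 345,600 pixels
    std::printf("720p @ 30 fps: %.1f Mpix/s\n", px_720p * 30.0 / 1e6);  // ~27.6
    std::printf("480p @ 60 fps: %.1f Mpix/s\n", px_480p * 60.0 / 1e6);  // ~20.7
    return 0;
}
```

480p at 60fps actually needs fewer pixels per second than 720p at 30fps, which is why a purely fill-limited title could favor the lower resolution at the higher frame rate.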

If anything it tells us a little bit about the developers he claims to know.

My take: I will believe it when I see it. Until then, and based on the impressions of other developers who are impressed with next-gen power, this is pure speculation without any evidence, based on his personal experience, which is ALWAYS a bad barometer for any objective issue.

As for PS3... well, it looks as though PS3 will be the hardware king this time around. Just as Xbox had the powerful hardware in current-gen.

Wow, PS3 will be the HW king? What breaking news :rolleyes: All I should add to this is that Sony and MS have really different design philosophies this generation. I think each set of hardware will cater to different developers, game genres, and budgets. I think we are at a juncture in HW development where we can no longer say "X is better/more powerful than Y", because those statements are founded in a very fixed environment. When developers all made 2D side-scrollers it was fair to talk about who could push more sprites and had more color depth. Today we are looking at development teams ranging from 20 people to hundreds of people, and development budgets from less than a million to close to 50M. We are looking at games ranging from very small landscapes with fixed gameplay to sweeping dynamic worlds. And NONE of these is better than the others at guaranteeing a FUN game that sells well. Not every big-budget game outsells/outperforms a small-budget game. And this does not even begin to touch on development tools, libraries, and ease of use--power is more than throwing muscle at a problem; it is intelligently making the task quicker and easier.

So when we talk about the HW king, it is important to consider what we are saying. From a technical # standpoint I think there is no problem saying PS3 will be the king.

My question is: which of the 3 consoles will allow the most developers to get the most out of their games? Whichever console's hardware allows the most quality games is the most powerful, IMO. Other people will look at it as "the console with the single best-looking game is the most powerful". And others will say "the console that has the most quality games, regardless of developer issues (i.e. large install base = more developers = more games = more quality games despite HW issues), is the most powerful". Again, we are talking about a very subjective subject. I am not sure we can say anymore, with any amount of definitiveness, that one console is head and shoulders above the rest--at least not at this point.

Maybe a year after their release we can start drawing these conclusions, BUT until we can see what developers can do with the HW, and what that HW will actually be, a lot of questions remain.

My take: We already knew the PS3, from a technical standpoint, would be the most "powerful". Nothing new here.

PS3 will probably have some features that Xbox 2 doesn't, and vice versa.

Wait, I thought he was on the inside? If he is a developer he should know what the X2 is, and with PS3 to be demonstrated in a month he should have a pretty good grasp of what the 18+ month (as claimed) PS3 GPU project holds. If I were "in the know" I think I could be a bit firmer than "probably". Heck, people here under NDA say more than that!!

My take: his own self-proclaimed synopsis of his post: conjecture, just like everything here on B3D!!

Of course, not everything he said was negative, but we already knew this stuff:

Microsoft definitely makes great developer tools and documentation, and it would be silly to think that XNA will not amount to much. -- There was a recent "speculation" thread that pretty much dumped on XNA based on conjecture (i.e. from people who are not working with it), but the quoted developers who do have access to the tools have been impressed. That aside, MS is in general a software company, and of course they will leverage their forte, which should help developers.

"In that sense, [X2] will probably be easier to develop for." -- A very overlooked fact. To get similar performance % out of a more powerful chip requires more time, more money, and/or easier development.

Overall, I found not a SINGLE item not already found in a patent or the "leak". I am not sure this is even newsworthy--this is like someone linking to a Vince, Pana, Dean, nAo, Faf, etc. post here (I would say Dave, but all he would say is "Yes"!) giving their summary of some hardware based on the info we know. Actually, I find each of the people I listed above to have a pretty good handle on the HW specs from the leaks and able to give some pretty insightful feedback.

Beyond that, a lot of what he says is his OPINION, and many times that opinion flies in the face of the general sentiment here. OK, he is not impressed. But others are. So who to believe? I have a hard time believing some of his claims are as universal as some people want to imply. E.g. he may be telling the truth about the 30fps based on who he talked to, but I highly doubt it will apply to the majority of developers. And I just want to point out again that he did not limit this to the X2, so his comments are just as valid for the PS3. I cannot fathom X2 and PS3 titles, with the CPU power AND the GPUs they have, being limited to 30fps. He may be telling the truth based on who he talked to, BUT I refuse to believe this is something that will apply industry-wide. If in fall of 2006 70% of games are running at 30fps I will apologize, but until then what he says makes no sense to me.

But all of this doesn't matter--I will believe the software. If the X2 is the disappointment he claims, the software will bear that out. Until then his speculation is just his opinion on what appears to be already-public info.
 
I basically took his post to mean: "We have no idea how to optimize our games for the Xenon yet." Basic frustration. The 30fps comment is one of the stupidest things I've ever seen taken seriously from a forum post.

Launch/1st-gen games are always crappy, with 1 or 2 standouts. The first batch rarely pushes the hardware. Every console in the last 20 years has had this problem. This next batch of consoles uses very exotic designs, similar to the Saturn or Jaguar. There are so many choices for keeping all the pipes running full on the Xenon: 3 cores, shared L2 cache, eDRAM, CPUs able to help the GPU out in a pinch, bridges connecting things that have never been connected before at high throughput. Then there's the great 10MB eDRAM debate (which has to be used as a framebuffer :? ). I still don't really understand how my X800 XT and Xbox are pumping out 1600x1200x32bit with 4x FSAA and 720x480x32bit with zero eDRAM. Must be magic.

Creativity-driven developers with ca$h, like Rare, Team Ninja, some Capcom teams, and Retro, are really gonna shine next gen. Corporate developers that have been milking middleware for 4 years are gonna have some growing pains at first.

It seems to me early optimization will be key for both MS and Sony. If MS can get the Xenon producing great-looking stuff, and Sony comes to market with a more powerful system but isn't producing stuff that looks significantly better, well, they're gonna have a tough sell against a larger install base. As always it's gonna come down to software and who has the "killer apps".

It's an exciting time.
 