Indeed, and the nice thing about having a bunch of SPUs with a good task system is that it's entirely possible to mix both types of programs (as a GPU enhancer and for gameplay systems).
For example, one area where the SPUs can benefit RSX a fair bit is, as Dean talks about, as a geometry modifier (trimming small triangles, doing progressive meshing, HO tessellation, etc.). You can do this for part of the frame across all SPUs, or dedicate 1 or 2 SPUs to the job and leave the others for general tasks.
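As a rough illustration of what one of those geometry jobs might do (a minimal sketch in plain C++ rather than SPU intrinsics; the vertex layout, the TrimSmallTriangles name, and the pixel-area threshold are all my own assumptions, not anything Dean described), a small-triangle trimming pass could look something like this:

```cpp
#include <cstdint>
#include <vector>

// Screen-space vertex position after the job has applied the view-projection
// transform (hypothetical layout; a real SPU job would work on aligned
// batches DMA'd into local store).
struct Vec2 { float x, y; };

// Keep only triangles whose screen-space area exceeds 'minArea' (in pixels^2)
// and which face the camera; returns the compacted index list the GPU draws.
std::vector<uint16_t> TrimSmallTriangles(const std::vector<Vec2>& pos,
                                         const std::vector<uint16_t>& indices,
                                         float minArea)
{
    std::vector<uint16_t> out;
    out.reserve(indices.size());

    for (size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        const Vec2& a = pos[indices[i + 0]];
        const Vec2& b = pos[indices[i + 1]];
        const Vec2& c = pos[indices[i + 2]];

        // Signed area via the 2D cross product: negative = back-facing,
        // near zero = degenerate or sub-pixel sliver.
        float area2 = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
        if (area2 > 2.0f * minArea)   // area2 is twice the triangle area
        {
            out.push_back(indices[i + 0]);
            out.push_back(indices[i + 1]);
            out.push_back(indices[i + 2]);
        }
    }
    return out;
}
```

The win is simply that the GPU never has to fetch or set up the triangles that would never have contributed visible pixels anyway.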
I suspect you could do a similar thing on the 360 CPU as well, to make the most of the second thread of each core: put a cooperative job system on the second hardware thread of each core that picks up VMX128 jobs to assist R500... If your main threads aren't doing many VMX ops, that might actually be a good way to get good performance out of both...
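A toy version of that cooperative job-system idea (again just a sketch using generic std::thread code rather than the actual 360 SDK; HelperJobPool and everything in it is hypothetical, and a console version would pin workers to specific hardware threads and hand out vectorised batches instead of std::function jobs) might look like:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// One helper worker per spare hardware thread, each pulling vector-heavy
// jobs off a shared queue while the main threads do the game logic.
class HelperJobPool {
public:
    explicit HelperJobPool(unsigned workers)
    {
        for (unsigned i = 0; i < workers; ++i)
            threads_.emplace_back([this] { WorkerLoop(); });
    }

    ~HelperJobPool()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& t : threads_) t.join();
    }

    void Submit(std::function<void()> job)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }

private:
    void WorkerLoop()
    {
        for (;;)
        {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();   // e.g. a skinning or culling batch that assists the GPU
        }
    }

    std::vector<std::thread> threads_;
    std::queue<std::function<void()>> jobs_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool done_ = false;
};
```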
And this is why developer feedback on the forums is so valuable, especially when it isn't part of a one-upmanship scenario. Obviously both platforms have their upsides and downsides (some of which they share), but hearing how developers approach these issues, and the different solutions they use to get a shipping product out the door, is very informative. Thanks, Dean.
Is it really true that compressed normal maps are an ATI exclusive?
He is most likely discussing 3Dc. This format retains fairly high image quality at ~4:1 compression compared to uncompressed 32-bit normals (e.g. Tom's and Neoseeker's parroting of ATI press kit info). A lot of software uses DXTC for normal map compression; 3Dc offers better quality with the same memory footprint IIRC, and is hardware accelerated so there is little impact on performance. Of course 3Dc can be used for other useful things, as ATI demonstrated in their very cool ToyShop demo.
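For anyone wondering how 3Dc gets away with storing so little: it keeps only the X and Y components of a tangent-space normal (two block-compressed channels) and rebuilds Z at sampling time from the unit-length constraint. A rough C++ equivalent of that reconstruction (the block decompression itself is omitted, and the function name is my own) would be:

```cpp
#include <algorithm>
#include <cmath>

struct Normal3 { float x, y, z; };

// 3Dc stores only the X and Y channels of a tangent-space normal; here they
// arrive as 8-bit values in [0,255]. Z is recomputed from the unit-length
// constraint z = sqrt(1 - x*x - y*y) when the normal is sampled.
Normal3 Reconstruct3DcNormal(unsigned char xByte, unsigned char yByte)
{
    Normal3 n;
    n.x = xByte / 255.0f * 2.0f - 1.0f;          // back to [-1, 1]
    n.y = yByte / 255.0f * 2.0f - 1.0f;
    float zz = 1.0f - n.x * n.x - n.y * n.y;     // clamp guards against
    n.z = std::sqrt(std::max(zz, 0.0f));         // compression error
    return n;
}
```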
The 360's default drive is 20GB, most of which is used by Microsoft, game demos, movies, and all that other cool stuff you can get off of Live. Only a small portion of it is usable by games, and it's typically used to cache data temporarily. The PS3, on the other hand, has 50GB at its disposal to stream in off a dual-layer disc.
With the PS3's HDD being standard, they can also stream some vital data from the HDD cache. I think this is one of the reasons it is standard.
Anyway, getting back on topic a bit more, I think one of the positives to be highlighted in this thread is the extent to which artwork contributes to what consumers perceive, almost exclusively, as engine-related 'graphics' achievements. We know that art assets are becoming an ever larger component of the costs in this industry, and personally I hope that an appreciation of their role in game creation begins to sink in with the lay-folk who argue the engine side exclusively when they compare in-game images to one another.
You are parroting Acert93 circa 2004 -- for shame!
Comparing to last gen (NV2A & GS; EE and a PIII/Celeron; memory architecture and resources), I think it is fair to say that the consoles are a) more similar than dissimilar compared with last gen and b) in regards to performance envelopes, more similar than dissimilar, overall, than last gen. Obviously there are some differences, even big differences, in the current hardware, and when doing a "checklist of feature performance" on each platform there are examples where there is a significant chasm in performance. But that said, we are reaching diminishing returns with certain techniques (e.g. the jump from a 1,000-poly model to a 10,000-poly model is visually more apparent than a 10,000 to 100,000 jump in most games at the same resolutions; likewise, 2x the cloth physics performance is the difference between a 200x200 mesh and a 141x141 mesh).

I do think we will see techniques on Cell that are much slower on Xenon and vice versa; ditto RSX and Xenos. How important these are, and how often they occur, are really a matter of game design and the movement of the industry, as well as tool design, what is determined to be the sweet spots, and what resonates with consumers. Devs with the time and resources will also find ways to leverage the strengths and offload work to other system resources where there are bottlenecks. And frankly, most consumers cannot tell the difference anyhow. In the long run we are faced with the reality that while Shrek 2 may have required a magnitude or more of the rendering power of Toy Story 1, they both look great. And dare I say the technology behind Shrek 1 would not have hindered the visual or creative abilities in Shrek 2 (just different design choices and compromises).
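To spell out that 200x200 vs 141x141 cloth comparison (my arithmetic, not from the original post): the particle count of a square grid grows with the square of its side, so doubling simulation throughput only buys a factor of $\sqrt{2}$ per side:

$$141^2 \approx 19{,}881 \approx \tfrac{1}{2} \times 200^2 = 20{,}000, \qquad 141 \times \sqrt{2} \approx 199 \approx 200.$$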
My real hope echoes what Cal stated in another thread: that with the next consoles the main concern is how the hardware helps him churn out new, creative games with a lot of high-quality content very quickly. At some point we will hit the question of, "What is better: 1x 2M-poly model in 1 month, or 4x 1M-poly models in 1 month?" This is where having Sony and MS in the market is pretty awesome, because on the Sony side you now have a platform (Cell) plus the leveraging of proven, robust, and well-supported GPU technology (NV, Cg, OpenGL), which tackles this problem from the hardware side to a large degree. What works on the SPEs now should be portable to Cell2, and for near-linearly scaling code (let's say a physics engine) you then have a ton of resources to instantly scale your current platform code. MS is obviously taking the software tool route, as that is their expertise. Yet Sony isn't ignoring software (they even made a big software development acquisition in early 2005, and I often wonder how things would be different if Sony had strongly partnered with, or acquired, Epic), and MS isn't ignoring hardware partnerships. I think consumers as well as publishers/developers benefit from this. Some pains, yes, but it keeps everyone honest and spurs development and investment into making the products better... Although I still think Panajav had the right idea with Xenos and Cell: the PlayStation 360.
Anyhow, to a degree I am not confident that next gen will see the flip-flop where performance takes a big back seat to creation. This gen there is still a major disparity in the performance and technology being deployed by various studios. Until we get to a point where unified lighting and shadowing technologies are easily deployable (I can dream of RT and GI, but that is not going to happen in realtime within 5 years), where physics for cloth, water, destroyed objects, etc. is not only fast but extremely fast and versatile in middleware, and where tools and middleware let content creation folks do most of the design and heavy lifting, with the focus on design rather than performance (within reason), I think we will still be in the current situation. Over the next decade, with the emergence of multicore solutions, I think development will become even more tech-design oriented, because so many resources have to be tailored toward getting the most out of the hardware instead of the hardware just doing what the artist wants.
Maybe we will never get to the place where the hardware and tools just allow people to be creative -- draw, design, WYSIWYG, drop-in-code gaming, etc. I think stuff like XNA is dabbling in this, and I think we all want to see more access and production from creative people... but now I am OT, so I will stop there. The next decade will be very interesting. Hopefully developers will be pleasantly surprised by the hardware and software MS and Sony offer in 2010-2012.