DaveB-
Well Ben, a 128-bit bus would have to be operating at 3GHz for 50GB/s of bandwidth. That ain't gonna happen; you need at least a 256-bit wide bus, but then you're still talking 750MHz (1.5GHz effective) DDR memory. 325MHz QDR with a 256-bit wide bus would manage it, though the expense would be ridiculous.
'96 100MHz SDR 1.49GB/sec
'01 300MHz DDR 8.94GB/sec
'06 900MHz QDR 53.64GB/sec
That's if we stick to a 128-bit bus.
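For reference, here is the arithmetic behind those figures: a minimal Python sketch, assuming a 128-bit bus and the base-2 (GiB-style) convention the numbers above appear to use.

```python
# Peak memory bandwidth = clock x transfers per clock x bus width in bytes.
# Divided by 2**30 to match the GiB-style figures quoted above.
def bandwidth_gb(clock_mhz, transfers_per_clock, bus_bits=128):
    bytes_per_sec = clock_mhz * 1e6 * transfers_per_clock * (bus_bits / 8)
    return bytes_per_sec / 2**30

print(round(bandwidth_gb(100, 1), 2))  # ~1.49  ('96, SDR)
print(round(bandwidth_gb(300, 2), 2))  # ~8.94  ('01, DDR)
print(round(bandwidth_gb(900, 4), 2))  # ~53.64 ('06, QDR)
```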
Darren-
Maybe five years ago, yeah, but XBox 2 will need to be ready to go in about four years from now... its specs will certainly need to be finalized in less than four years.
Four and a half. Let's see where bandwidth sits in November if you want to shave a year off.
Yeah, I realise that; still, at 1920x1080x64 with 4xFSAA, a 32-bit compressed Z-buffer and around 5x overdraw at 60fps, we're looking at around 28GB/s just for the framebuffer and Z-buffer. Now add CPU bandwidth, texture bandwidth, geometry bandwidth etc. If you're right about 50GB/s for high-end video RAM in 2006, then XBox 2 is going to need that high-end RAM for those settings (and what if people want to move to 100fps on consoles?).
1080i is interlaced, so your bandwidth numbers are double what they should be. Moving over 60fps on a console? TVs don't handle that well when you disable VSync; until the standards are revised sometime around thirty years from now, you're talking closer to 14GB/sec. Bandwidth to spare. And of course, you're assuming they won't be moving to an embedded framebuffer. Given Moore's law, that should be fairly simple (the N64 had 4MB total RAM, the GC has 75% embedded).
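To make those framebuffer figures concrete, here is a rough sketch of the estimate being argued over. My assumptions: 4x supersampling, 64-bit colour plus 32-bit Z per sample, about 5x overdraw, write traffic only, and the interlaced case counted as half the frame rate.

```python
# Rough framebuffer + Z traffic: samples per frame x bytes per sample x overdraw x fps.
def fb_traffic_gb(width, height, fps, samples=4, colour_bytes=8, z_bytes=4, overdraw=5):
    bytes_per_frame = width * height * samples * (colour_bytes + z_bytes) * overdraw
    return bytes_per_frame * fps / 1e9

print(fb_traffic_gb(1920, 1080, 60))  # ~29.9 GB/s, the "around 28GB/s" progressive case
print(fb_traffic_gb(1920, 1080, 30))  # ~14.9 GB/s, the interlaced "closer to 14GB/sec" case
```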
Even if I did, the cost factor is still there, as we're talking about XBox 2 using high-end, cutting-edge 50GB/s RAM (which I don't even think will exist).
50GB/sec is, I think, very conservative; wait and see where we're sitting at the end of this year.
With a PowerVR design it could do 1920x1080x64 with 4xFSAA at 60fps (outputting at 32-bit) with only 500MB/s of bandwidth for the framebuffer and no external Z-buffer! Which means it could use extremely low-end RAM, or ordinary low-to-mid-range RAM, which means a large price difference as well as loads more bandwidth left available for textures, the CPU etc.
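That 500MB/s figure falls out of a tile-based deferred renderer keeping colour and Z on-chip per tile and only writing the final, downsampled 32-bit frame to external memory. A quick sketch of the arithmetic:

```python
# External framebuffer traffic for a tile-based deferred renderer:
# only the resolved 32-bit output frame is written out to memory.
def tbdr_output_mb(width, height, fps, output_bytes=4):
    return width * height * output_bytes * fps / 1e6

print(tbdr_output_mb(1920, 1080, 60))  # ~498 MB/s, the "only 500MB/s" figure
```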
That sounds real nice. What about the fact that you will likely be dealing with more polygons than pixels? That changes the bandwidth factoring quite a bit.
Opinions I've heard say that Xbox looks best, GameCube second and PS2 third.
Massive model corruption detracts a bit more from a game than it does for those people, I guess.
I was talking about 640x480x32 with 4xFSAA, not just 640x480x32. Also, you're not factoring in overdraw. With the limited shared main-memory bandwidth available to the XBox, the Kyro III would push a lot more fps and leave more texture bandwidth left over.
100 million polygons per second. Rely on fillrate on a console and any vanilla PC is going to throttle you.
I'm not sure what you mean here; whether Nvidia does XBox 2 or IMG/VIA does, there's always going to need to be a north/south bridge as well as a CPU and graphics and sound chips... so what's your point?
nV supplies all of this in their cost. They have combined functionality.
Why would they be pushing for strong OpenGL support in their next console? As for IMGTEC not being known for OpenGL support: they used to be known for poor OpenGL support, but Kyro changed that; Kyro II's OpenGL drivers are very impressive.
Development costs are skyrocketing; dev houses are laying people off and closing up because of it. Backing a high-level API with a decade of refinement behind it, along with widespread developer knowledge of it, makes it the only logical choice outside of DirectX.
KyroII has good OpenGL drivers? They must have improved a staggering amount since I had one.
We don't have 10GB/s though. Theoretically we do, but in practice DDR is not twice as fast as SDR.
Compare the GF2 MX 400 to the GF4 MX. Crossbar makes a big difference.
Joe-
Simply put, my own extrapolation is that bandwidth will be "no more of a problem in 5 years than it is a problem now".
What big bandwidth problems are left? We have moved from 640x480x16 @ 30FPS five years ago to 1024x768x32 with 4xFSAA @ 60FPS+ today, with rasterization bandwidth needs rising on top of the massive increase we have seen from resolution. We get to 1600x1200x64 with 4xFSAA and then what is left? The increases in bandwidth needs due to rising resolution demands have significantly outpaced those on the rasterization side; not too much longer and that end of bandwidth will not be an issue.
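To put rough numbers on that, here is a sketch counting colour writes only (ignoring Z and overdraw), just to show how much of the resolution-driven growth is already behind us:

```python
# Display-driven colour traffic at each point: pixels x bytes per pixel x samples x fps.
def colour_traffic_mb(w, h, bytes_per_px, samples, fps):
    return w * h * bytes_per_px * samples * fps / 1e6

past   = colour_traffic_mb(640, 480, 2, 1, 30)    # ~18 MB/s   (640x480x16 @ 30FPS)
today  = colour_traffic_mb(1024, 768, 4, 4, 60)   # ~755 MB/s  (10x7x32 4x @ 60FPS)
future = colour_traffic_mb(1600, 1200, 8, 4, 60)  # ~3686 MB/s (16x12x64 4x @ 60FPS)

print(round(today / past, 1))    # ~41x growth already absorbed
print(round(future / today, 1))  # ~4.9x left to go
```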
Vince-
Umm... no. That's perhaps the most ridiculous comparison possible. MDK2 wasn't designed for a software rasterization pipeline and it shows. In software it is just emulating the hardware (OGL) spec they designed to and the calls that it uses...
Could have used the Lightworks render engine, MentalRay or Renderman as examples too, although they are significantly slower than the renderer used in MDK2, and they are designed to run in software.
Do you not read my posts? I don't even think there will be fragment shading in 5 years, so where's the problem? Sony's backing the Stanford-based Real-Time High-Level Programmable Shading project... my guess is they're doing that for a reason.
How many years in development? How many years left? That will be real great for game development, don't you think? Moving to a spline-based/HOS/geometric LOD system is something that works for hardware too.
Nobody's designing for a fully software-driven 3D pipe, yet.
That's how 3D started; hardware support actually hasn't been around all that long. Gaming didn't create 3D.
Besides, if you're drawing an 80,000-polygon mesh/character, what's wrong with shading at a per-vertex level?
Where should I start? Filtering is the most serious issue. Rely on per-vertex shading and you will need to revert to a rather heavy geometric LOD system, with differing vertex shading characteristics at each level of tessellation, or run into massive aliasing issues. Then you have the difficulty of dealing with multiple environmental effects on every vertex, assuming you want to drop pixel shader support due to the difficulty of emulating it without dedicated hardware. If you don't, then you're screwed with the weak rasterizer support anyway, so you're better off trying to force it through using some sort of vertex shading, I would assume.
So you chew up a load of processing power on geometry, then chew up more on a geometric LOD system, then chew up some bandwidth along with more CPU overhead to run an alternating vertex shader scheme based on distance and tied in with your LOD system, then amplify your T&L load significantly by relying on extremely complex vertex shader routines, of which you will need at least six or more to avoid serious aliasing issues. Building the code for all of that will be real simple though, right? Particularly on a completely new architecture with a new instruction set to learn, a new register architecture and primitive compilers, on top of having massive multithreading issues to work around.
When the size of the average polygon nears that of a pixel, your almighty hardware rasterizer breaks down
Why in the world do you think that? I can only assume you have never worked with sub-pixel-sized polys on a hardware rasterizer.
Archie-
Do they make that mistake? The 'numbers' delivered are rather solid numbers based on specific cases of what the hardware 'will' do.
I'm not talking about the specifications; I'm talking about something being more impressive from an overall engineering standpoint versus being a better gaming platform. The XBox is easily the most plebeian and boring design of all the consoles, with the PS2 clearly the most impressive from an engineering standpoint. That doesn't help it.
This is a bit of a reach, because you're basing the judgment of a discrete piece of silicon on the product as a whole?
That is my part of the discussion. Having "Cell" by itself will not make the PS3 better than the others, even if they do have a significantly weaker processor. Add in increased development costs, and does it make for a better platform?
Should. The natural trend is for development costs to increase. MS and Nintendo have gone to great lengths to reduce this; Sony is clearly more interested in engineering accolades.
I think you have a bit of a misunderstanding of where the difficulties of the PS2 lie. Even with the funky register layout and MMI instructions, the MIPS-based EE core is hardly something developers are having difficulties with (if they are, then they have other serious issues to work out).
That IS the point.
Well, to some extent they are with DX9 (as is OpenGL 2.0).
To some extent, with backwards compatibility built on top of work already done.
Well, the problem with this assumption is that you're using an x86 processor (well, actually the PC architecture as a whole plus its software environment) as the basis for an argument against designing hardware for a target that poses almost none of the design constraints faced by those designing hardware for the PC market, or for products that are going to share architectural aspects across various markets.
Which shows itself off real well in the Athlon throttling the comparable current MIPS processors in SGI workstations running render tests under Maya. With the exception of the IR-class machines and the like, x86 PCs have closed the gap with non-PC hardware designed natively for 3D.
You're also contradicting yourself a bit by extolling the virtues of a "fully" programmable GPU vs. a software rasterizer (assuming you're talking about total rasterization and not just setup), since code running on a "fully programmable" GPU is essentially a software rasterizer in itself (excepting certain fixed functions like setup).
The difference is in the level of load it places on dedicated hardware designed explicitly around a limited set of functions, those devoted to graphics, versus a 'general purpose' CPU which is designed for protein folding.
Could you elaborate on what you're trying to point out there? It seems a bit too broad a generalization.
Pulling off the same effects that pixel shaders handle, on a CPU using software rasterization, runs at roughly 0.1% of the speed. You can test this using the MS DX software rasterizer, or compare visualization render engines and time the impact that applying certain effects has on a render.
I think you really need to realize just how much computation a teraflop is... Neither the GScube (the 16 I've used, and the 64 I've seen but not used), nor the SX-4 and SX-5 that I got to mess with in school, was achieving even half that much computation. And I have yet to see anything done in real time on any current GPU that comes close (the P10 will be interesting, though).
Extrapolating a TFLOP out from a GFLOP, based on the dozens of software render engines I've used, works out to nowhere near competitive with hardware rasterizers three years from now if current trends hold (TNT vs. NV2A). On a GFLOP CPU (actually a 1GHz Athlon with a peak rating of 4 GFLOPS) I've seen what kind of frames I can render out in 3 minutes, which is 1/10,800 of real time. When the Sony die-hards were saying 6TFLOPS in 2003, which they were, I still wasn't impressed in terms of what it could do compared to hardware rasterizers (my three-minute test covers that too; in theoretical terms it would cover up to 40TFLOPS). Of course, I'm using render engines that have only had roughly a decade of refinement and tweaks to perform at their maximum on x86 hardware. You do get better anti-aliasing and filtering than we currently have by a sizeable margin, but you lack the level of model complexity and effects unless you want to push the render times over the three-minute mark.
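For what it's worth, the arithmetic behind the 1/10,800 figure and the "up to 40TFLOPS" claim, as a quick sketch assuming a 60fps real-time target and the 4 GFLOPS peak quoted above:

```python
# How far off real time a 3-minute offline frame is, and what theoretical FLOPS
# it would take to close the gap at the same efficiency.
peak_gflops    = 4        # quoted peak for the 1GHz Athlon
render_seconds = 3 * 60   # one offline frame takes 3 minutes
target_fps     = 60       # assumed real-time target

slowdown = render_seconds * target_fps           # 10,800x slower than real time
required_tflops = peak_gflops * slowdown / 1000  # ~43 TFLOPS at the same efficiency

print(slowdown)         # 10800
print(required_tflops)  # 43.2, i.e. it covers claims of "up to 40TFLOPS"
```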