Gunhead
Hmm, missed that one.
Darren
For RAM, five years ago we were dealing with ~1.4GB/sec of bandwidth for video cards vs 10.4GB/sec today (actual numbers, I checked). That would put high-end video card RAM at about 70GB/sec, all of it still on a 128-bit bus. Improved crossbar-type techniques, QDR and raw frequency boosts are likely to keep RAM moving as fast as it has been for the last several years. So high-end video cards would be packing about 15% more bandwidth than what I'm claiming for the XB2, which is pretty close to what happened with the XB1.
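A quick back-of-the-envelope sketch of that extrapolation (the 1.4 and 10.4 GB/sec figures are the ones above; the projection just assumes the same five-year growth rate holds):

```python
# Rough sketch of the bandwidth extrapolation; 1.4 and 10.4 GB/sec are the
# figures quoted above, the projection just reuses the same growth rate.
past_bw = 1.4       # GB/sec, high-end video card RAM ~five years ago
current_bw = 10.4   # GB/sec, high-end video card RAM today

growth = current_bw / past_bw      # ~7.4x over five years
projected = current_bw * growth    # ~77 GB/sec five years out (rounded to ~70 above)

print(f"growth factor: {growth:.1f}x, projection: {projected:.0f} GB/sec")
```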
1080i is 1920x1080 interlaced. I was also figuring for 64-bit color; the memory hit isn't the same as the move from 16-bit to 32-bit was, since there is no need for greater Z accuracy.
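For rough scale (my arithmetic, not a quoted spec): a 1920x1080 buffer at 64-bit color with Z held at 32 bits works out to something like this, assuming a single color buffer and no extra buffers:

```python
# Rough framebuffer math for 1080i at 64-bit color, illustrative only.
width, height = 1920, 1080
color_bytes = 8   # 64-bit color
z_bytes = 4       # Z stays 32-bit, per the point above about Z accuracy

color_mb = width * height * color_bytes / 2**20
z_mb = width * height * z_bytes / 2**20
print(f"color buffer: {color_mb:.1f} MB, Z buffer: {z_mb:.1f} MB")
# -> ~15.8 MB and ~7.9 MB
```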
DC v PS2, I think there is a rather large rift as of right now. The poly detail is massive on many games. When looking for comparisons between platforms I always look at the best for each; SpyHunter actually looks better on the PS2 than it does on the Cube or Box.
VIA as a CPU provider: right now their best CPU can't keep up with the XBox's unit (FPU-wise), let alone what was available eight or nine months ago.
For the K3 being better in fill-intensive situations on a TV with today's consoles: 640x480 at 60FPS is 18,432,000 pixels per second. How are you figuring an advantage? The Kyro3 will just end up doing nothing with its pixel pipes for longer than the XB does.
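Here's where that pixels-per-second number comes from, and roughly how much slack it leaves. The 1 Gpixel/sec fill rate below is purely an illustrative placeholder for a console-class GPU, not a quoted spec for either chip:

```python
# Where the 18,432,000 figure comes from: standard TV resolution at 60FPS.
width, height, fps = 640, 480, 60
pixels_per_second = width * height * fps
print(pixels_per_second)              # 18432000

assumed_fill_rate = 1_000_000_000     # pixels/sec, illustrative only
headroom = assumed_fill_rate / pixels_per_second
print(f"~{headroom:.0f}x overdraw headroom before fill rate is the limit")
```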
VIA's North and South bridges aren't as good as nV's XB solution, and you also have to add the cost of a graphics chip along with a sound chip and the bridge chips, increasing the number of chips you need per unit.
Missed the Ensigma, will have to check that out.
Read the part on Nintendo wrong. As far as cost goes, this is the first time Nintendo has aimed for low development costs. I am assuming they will be pushing for strong OpenGL support with their next console, and neither ATi nor IMG are known for that (although that could open the door for Creative with their 3DL acquisition). Look at the N64: it was by no means a cheap system, with an SGI-supplied graphics chip and processor plus RDRAM. Trying to guess whether they will put cost ahead of performance next gen depends a lot on how this gen turns out.
DaveB-
You know, there are too many DaveBs on this board.
I was figuring for a 128-bit bus.
V3
A decade working with visualization.
Vince
You make the exact same mistake that Sony does, assuming that an impressive set of technical specifications in the abstract sense makes for a better console. Look at the EE vs the P3 used in the XBox: on paper it's absolute and utter obliteration in Sony's favor, yet they still get whipped in the graphics department and cost more to develop for. The PS3 should amplify this greatly.
A fully programmable GPU built around a high-level API with a decade of refinement, plus code compilers with several decades of refinement behind them, versus a completely new architecture where they have to start from scratch. This is a much worse scenario than the PS2: there they were using a modified MIPS processor, something developers had been working with for many years, and even then it is taking them years to get the hang of it. With Cell, they have to build compilers, try to get threading and load-balancing issues ironed out, work with a new instruction set, and do all of this on the fly. If MS were going to try to build a high-level API from scratch for a chip that wasn't even built yet for the XB2, I'd be saying the same thing about them.
As far as using a "whopping" TFLOP for a rasterizer goes, I have to assume you've never worked with software rasterizers at any length (high-end packages). Using a title like MDK2, which features a pure software rasterizer (although extremely primitive by comparison), a GHz x86 CPU pushes 1%-2% of the framerate that a GeForce1 SDR does, and even then (when using hardware) the limit is still CPU-based, as the processor can't push the game code fast enough. That CPU is only ~4GFLOPS, so taking it all the way up to a TFLOP you would be between 2.5x-5x faster than a GeForce SDR, but it gets a lot worse for CPUs. Trying to emulate pixel shaders on a CPU, you will be closer to 0.1% of the speed of dedicated hardware from the same time frame, which means you would be slower than dedicated hardware several generations old.
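A sketch of that scaling arithmetic, using the figures above and the (generous) assumption that software rasterization scales linearly with FLOPS:

```python
# Scaling arithmetic behind the TFLOP-rasterizer point, using the numbers
# quoted above: a ~4 GFLOPS CPU hitting 1%-2% of a GeForce1 SDR's framerate.
cpu_gflops = 4
tflop = 1000                    # GFLOPS
low, high = 0.01, 0.02          # fraction of GeForce1 SDR framerate

scale = tflop / cpu_gflops      # 250x the FLOPS
print(f"{low * scale:.1f}x to {high * scale:.1f}x a GeForce1 SDR")
# -> 2.5x to 5.0x: a full TFLOP of CPU barely clears a first-gen consumer GPU
```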
Looking at the TFLOPS/GFLOPS numbers for a CPU vs a rasterizer is useless, as dedicated hardware needs far fewer operations than a CPU does to complete the same task. The PS2 displays this nicely when you compare its best titles to those on the others: it has a significantly more powerful CPU, has had more development time and money spent on it, and still can't compete.
Building a code base like nVidia's will take years for an entirely new architecture, and even then a monstrous CPU with a primitive rasterizer does not work as well for real-time graphics as a mild CPU with a monstrous rasterizer does.