A thought on X-Box 2 and Creative Labs.

V3 said:
1TFLOPS CPU without rasterizer, isn't going to deliver anything significant in real time graphics.

Um, ok.

PS. Last time I checked, 1TFlop was roughly equal to:

500 Pentium 4s @ 1.5GHz, or
675 nodes of a Linux/MPP server cluster (@ 1 GHz/node), or
40 SGI Origin 3800 servers (32 nodes @ 400 MHz/node), or
17 Sun Enterprise 10000 servers (64 nodes @ 466 MHz/node)
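(For the curious, a quick back-of-envelope sketch in Python of what those equivalences imply per node; the labels and arithmetic here are mine, not from the original comparison.)

# Rough sanity check of the per-node throughput those comparisons imply.
# The unit counts come from the list above; the arithmetic is mine.
TARGET_GFLOPS = 1000  # 1 TFLOPS

systems = {
    "Pentium 4 @ 1.5GHz":                  500,      # units needed
    "Linux/MPP node @ 1GHz":               675,
    "Origin 3800 (40 servers x 32 nodes)": 40 * 32,
    "Enterprise 10000 (17 x 64 nodes)":    17 * 64,
}

for name, count in systems.items():
    print(f"{name}: ~{TARGET_GFLOPS / count:.2f} GFLOPS per node implied")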

Besides, as PSurge stated, the 'P10', which is arguably the most advanced rasterizer out there... only peaks at 170GFlops.

You're right, let's put a Kyro3 in there :rolleyes:

PS. Oh, and because it's software, it's dynamic and adaptable to the type of game and/or scene you're in. You're not wasting fixed-function logic like you do in the current implementations (e.g. FSAA, or full pixel shader use on NV2A). The possibilities are huge.
 
Besides, as PSurge stated, the 'P10', which is arguably the most advanced rasterizer out there... only peaks at 170GFlops.

Yes, and that is going to be done by the end of this year. Come the end of 2005, 1 TFLOPS won't look that big anymore.

Anyway, that Cell talk mentioned nothing about graphics. What Kutaragi-san did mention is that he would like to eliminate the server/client scheme in networking. He was saying that this idea is too old and creates a bottleneck.

So I guess that with PS3 there won't be a server you connect to when playing online games, just a cluster of some sort. This might increase the number of online users interacting with each other in some games, instead of having them split across different servers.
 
The PS3 CELL news reports are just FUD against Xbox and GameCube. Sony has a real problem because the PS2 was originally sold as the most powerful console, and now it's the weakest console. By claiming that their next processor will be 1000 times faster, they get to claim the performance high ground, compared to the other consoles, which are "only" 2 or 3 times faster.

Sony's very good at playing the perception game!

Regarding Xbox 2, I would guess they'll do a bake-off between all the different graphics and CPU vendors, the same as for the Xbox 1.

One very important benefit of choosing NVIDIA for the Xbox 1 was that developers were able to develop using NV15 and NV20 cards well before the Xbox hardware was ready. This led to much better launch titles than the PS2 or the GameCube had, which helped Xbox establish itself.
 
duffer said:
One very important benefit of choosing NVIDIA for the Xbox 1 was that developers were able to develop using NV15 and NV20 cards well before the Xbox hardware was ready. This led to much better launch titles than the PS2 or the GameCube had, which helped Xbox establish itself.


Establish itself? No way. Sales are bad due to price but also due to a lack of good titles.
 
bbot said:
duffer said:
One very important benefit of choosing NVIDIA for the Xbox 1 was that developers were able to develop using NV15 and NV20 cards well before the Xbox hardware was ready. This led to much better launch titles than the PS2 or the GameCube had, which helped Xbox establish itself.


Establish itself? No way. Sales are bad due to price but also due to a lack of good titles.

Sales are below forecasts. In the US it has done OK; in Europe and Japan it really got clobbered. The system already has two 1-million-seller games; it took the PS2 a while before that happened, even though it sold 1 million units on launch day.

I think Microsoft has done a great job at getting good games out for the X-Box.
 
Gunhead

Hmm, missed that one.

Darren

For RAM, five years ago we were dealing with ~1.4GB/sec of bandwidth for video cards vs 10.4GB/sec today (actual numbers, I checked ;) ). That would put high-end video card RAM at about 70GB/sec, and all of that while keeping a 128-bit bus. Improved crossbar-type techniques, QDR and raw frequency boosts are likely to keep RAM bandwidth growing as fast as it has for the last several years. So high-end video cards would be packing about 15% more bandwidth than what I'm claiming for the XB2; that's pretty close to what happened with XB1.
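As a rough sketch (my own arithmetic; the 1.4 and 10.4 GB/sec figures are the ones above), extending the last five years' growth rate forward lands in that same ballpark:

# Naive growth-rate extrapolation; 1.4 and 10.4 GB/sec are the figures above.
past_bw  = 1.4    # GB/sec, high-end video card RAM ~5 years ago
today_bw = 10.4   # GB/sec, high-end today (2002)

growth = today_bw / past_bw          # ~7.4x over five years
future_bw = today_bw * growth        # assume the same growth again
print(f"Growth over the last five years: {growth:.1f}x")
print(f"Same rate again: ~{future_bw:.0f} GB/sec for high-end RAM")  # ~77 GB/sec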

1080i is 1920x1080 interlaced. I was also figuring for 64-bit color; the memory cost isn't the same as the move from 16 to 32 was, since there is no need for greater Z accuracy.

DC vs PS2: I think there is a rather large rift as of right now. The poly detail is massive on many games. When looking for comparisons between platforms I always look at the best for each; SpyHunter actually looks better on the PS2 than it does on the Cube or Box.

VIA as a CPU provider: right now their best CPU can't keep up with the XBox's unit (FPU), let alone what was available eight or nine months ago.

For the K3 being better in fill-intensive situations on a TV for today's consoles: 18,432,000 pixels per second at 60FPS. How are you figuring an advantage? The Kyro3 will be doing nothing with its pixel pipes for longer than the XB?

VIA's North and South bridges aren't as good as nV's XB solution, and you also have to add the cost of the graphics chip along with a sound chip and the bridge chips, increasing the number of chips you need per unit.

Missed the Ensigma, will have to check that out.

Read the part on Nintendo wrong. As far as cost goes, this is the first time Nintendo has aimed for low development costs. I am assuming they will be pushing for strong OpenGL support with their next console; neither ATI nor IMG are known for that (although that could open the door for Creative with their 3Dlabs acquisition). Look at the N64: it was by no means a cheap system, with an SGI-supplied graphics chip and processor, plus RDRAM. Trying to guess whether they will put cost ahead of performance next gen depends a lot on how this gen turns out.

DaveB-

You know, there are too many DaveBs on this board ;)

I was figuring for a 128bit bus.

V3

A decade working with visualization.

Vince

You make the exact same mistake that Sony does, assuming that an impressive set of technical specifications in the abstract sense makes for a better console. Look at the EE vs the P3 used in the XBox: on paper it's an absolute and utter obliteration in Sony's favor, yet they still get whipped in the graphics department and cost more to develop for. PS3 should amplify this greatly.

A fully programmable GPU built around a high-level API with a decade of refinement, and code compilers with several decades of refinement behind them, versus a completely new architecture where they have to start from scratch. This is a much worse scenario than the PS2; there they were using a modified MIPS processor, something developers had been working with for many years, and even then it is taking them years to get the hang of it. With Cell, they have to build compilers and try to get threading and load-balancing issues ironed out, they have to work with a new instruction set, and they have to do all of this on the fly. If MS were going to try and build a high-level API from scratch for a chip that wasn't even built yet for XB2, I'd be saying the same thing about them.

As far as using a "whopping" TFLOP for a rasterizer, I have to assume you've never worked with software rasterizers at any length (high-end packages). Using a title like MDK2, which features a pure software rasterizer (although extremely primitive by comparison), a GHz x86 CPU pushes 1%-2% of the framerate that a GeForce1 SDR does, and even then (when using hardware) the limit is still CPU-based, as the processor can't push the game code fast enough. That is only ~4GFLOPS, so taking that all the way up to a TFLOP you would be between 2.5x and 5x faster than a GeForce SDR. But it gets a lot worse for CPUs: trying to emulate pixel shaders on a CPU, you will be closer to 0.1% of comparable dedicated hardware from the same time frame, which means you would be slower than dedicated hardware several generations old.
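Spelling that scaling argument out (a sketch only; the 1-2% figure and the ~4 GFLOPS estimate are the ballpark numbers used above):

# Naive FLOPS scaling of a pure software rasterizer, per the argument above.
cpu_gflops    = 4.0             # assumed for a ~1GHz x86 CPU
target_gflops = 1000.0          # the hypothetical 1 TFLOPS CPU
sw_fraction   = (0.01, 0.02)    # MDK2 software path at 1-2% of a GeForce SDR

scale = target_gflops / cpu_gflops          # 250x more raw FLOPS
low, high = (f * scale for f in sw_fraction)
print(f"Best case by raw scaling: {low:.1f}x to {high:.1f}x a GeForce SDR")  # 2.5x to 5.0x

# Pixel-shader emulation at ~0.1% of contemporary hardware is far worse:
print(f"Shader emulation: ~{0.001 * scale:.2f}x contemporary dedicated hardware")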

Looking at the TFLOPS/GFLOPS numbers for a CPU vs a rasterizer is useless, as dedicated hardware needs far fewer operations to complete the same task a CPU does. The PS2 displays this nicely when comparing its best titles to those on the others: it has a significantly more powerful CPU, has had more development time and money spent on it, and still can't compete.

Building a code base like nVidia's will take years for an entirely new architecture, and even then a monstrous CPU with a primitive rasterizer does not work as well for real-time graphics as a mild CPU with a monstrous rasterizer does.
 
For the XBOX, or any other new console, to succeed it needs a steady, consistent stream of 'killer apps' (let's say one a month), i.e. AAA-quality games, not a huge influx of second-rate titles.

This is where the PS2 initially failed (it has now got what it needed), and where Nintendo are making sure they do not repeat the mistake they made with the N64.
 
IMO, Sony is just making the same mistake that it did last time by designing a system that will be inherently difficult for programmers. If you look at the games on the PS2, a lot of them don't live up to the power that is behind the PS2, and I'm not just talking about launch games, but current games too. Let's call a spade a spade: MS had perfect execution. They did everything right, besides making the mistake of overpricing the Xbox overseas. Why then is it not outselling the PS2? Brand recognition, my friend, plus the fact that the PS2 has been out a year longer and has more games to select from. Despite the less-than-spectacular sales, indications are that the Xbox will ultimately be more popular than the GC! That's incredible for MS's first console! Also, don't forget that MS just recently cut the price of the Xbox in Europe and Japan. Early indications are that Xbox sales have spiked due to this. We'll see how this turns out, though...
 
The chances of Sony doing 1 TFLOPS with a chip the same size as the EE at 0.1µm are somewhere between slim and none. That would be an improvement of 160 times; with the process giving about a 3x density increase, and being generous and saying it will run at 6x higher frequency, that still leaves a performance increase of ~9x unaccounted for. My guess is it will be 200 GFLOPS or lower.
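The arithmetic, spelled out (the ~6 GFLOPS peak for the EE is my assumed baseline; the 3x and 6x factors are the ones above):

# The ~6.2 GFLOPS EE peak is an assumed baseline; the 3x and 6x factors
# are the ones used in the paragraph above.
ee_gflops      = 6.2
target_gflops  = 1000.0
density_gain   = 3      # from the process shrink
frequency_gain = 6      # generous clock scaling

required    = target_gflops / ee_gflops        # ~160x
explained   = density_gain * frequency_gain    # 18x
unaccounted = required / explained             # ~9x left over
print(f"Required {required:.0f}x, explained {explained}x, unaccounted ~{unaccounted:.0f}x")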

Even 100 GFLOPS of programmable power would still be enormous though; don't forget that the P10 only has around 6 GFLOPS to burn in its geometry unit...

The point is that Sony can grow the power of its processor at the same accelerated rate as GPUs; desktop processors, with their Moore's law pace, won't be able to match that... and with Sony getting access to world-class process technology and eDRAM, competitors will have a hard time matching their graphics unit too (the gate delay and eDRAM density of TSMC/UMC compared to IBM are pretty sad).

This is based on logic, not hype... it may be hard to program, but this time around it will be faster than desktop processors by a far greater ratio than the original EE was at release. Developers won't be able to resist that kind of power; they will adapt, and I'm sure Sony will give them the tools needed to do so... IMO Sony should switch to Occam ;) (for Simon: the latest incarnations can even use references now and still guarantee zero aliasing, so you can make linked lists to your heart's content :).
 
A decade working with visualization.

Well, being involved in a lot of scheduler work, I was actually kind of anxious to see what those three companies are going to achieve with their new networking paradigm.

If they do manage to remove the client/server relationship like they said they are aiming for, that's going to be huge. It might eat into the server sector, and might put MS's .NET plan into question.

But I remain skeptical; I don't think they'll pull it off.

From what I am seeing now from all the abstract info, PS3 will probably contain:

EE 2
GS 2
CELL (functioning as a communications processor for transferring data)

Those would be the main processors, with the usual sound/IO processors in there somewhere.
 
Well ben, a 128-bit bus would have to be operating at 3GHz for 50GB/s of bandwidth. That ain't gonna happen; you need at least a 256-bit wide bus, but then you are still talking 750MHz (1.5GHz effective) DDR memory. 325MHz QDR with a 256-bit wide bus would get close, but the expense would be ridiculous. Wider buses also increase the minimum number of memory chips you can have, which will drive up expense further.
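Spelled out (decimal GB, i.e. 10^9 bytes; plain arithmetic, nothing vendor-specific):

# Peak bandwidth = bus width (bytes) * clock * transfers per clock.
def bandwidth_gb_s(bus_bits, clock_mhz, transfers_per_clock=1):
    return bus_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

print(bandwidth_gb_s(128, 3000))     # 128-bit @ 3GHz SDR   -> 48.0 GB/s
print(bandwidth_gb_s(256, 750, 2))   # 256-bit @ 750MHz DDR -> 48.0 GB/s
print(bandwidth_gb_s(256, 325, 4))   # 256-bit @ 325MHz QDR -> 41.6 GB/s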

Dave
 
Ain't gonna happen? Price will be ridiculous? Don't underestimate the mother of invention: necessity.

Think back about 5 years ago, as ben was saying. At the time, the best single-chip solution, bandwidth-wise, was the Riva 128: 128-bit, 100MHz SDRAM.

Approx 1.6 GByte/sec.

Now, if five years ago I had told you we would have 10 GB/sec by 2002, you would probably answer the same way you are now: ain't gonna happen... too expensive... too complex. All the same arguments about complexity, chip density, traces, etc. you make now would have applied then too. The "impossibility" of affordable raw bandwidth is what the PowerVR / deferred rendering camp was pretty much counting on to bolster the future of bandwidth-saving technology: raw bandwidth simply would not be remotely affordable enough to keep up with GPU demand.

But don't feel bad. There weren't too many people 5 years ago (myself included) who thought 10 GB/sec with "conventional external RAM" would be a reality today. ;) Microsoft's entire Talisman project was based on the belief that "no way will that kind of bandwidth be affordable any time soon".

But here we are, 5 years later, sitting on 10 GB/sec and reportedly on the verge of 20 GB/sec.

50+ GB/sec in another 4-5 years' time? I wouldn't be surprised at all. That's not to say I think it will be an easy road, but it seems that every time we think we've hit a "bandwidth wall" over the past 5 years, IHVs have found some way through it. :)
 
For RAM, five years ago we were dealing with ~1.4GB/sec of bandwidth for video cards vs 10.4GB/sec today (actual numbers, I checked). That would put high-end video card RAM at about 70GB/sec, and all of that while keeping a 128-bit bus.

Maybe over five years, yeah, but XBox 2 will need to be ready to go about 4 years from now; its specs will certainly need to be finalized in less than 4 years.

4 years ago 2.75GB/s was the high-end video bandwidth; today it's 10.15GB/s with DDR, around 8GB/s effective since DDR is less efficient than SDR. Theoretically, bandwidth for the high end has increased by 3.7 times in the last 4 years, meaning 37.5GB/s for high-end RAM in 2006. Effectively it's increased by about 2.9 times when you factor in that DDR is not actually twice as fast as SDR. I think 30-35GB/s is a decent guess for 2006. 50GB/s sounds over the top to me, and frankly 70GB/s is insane IMO ;). Unless you expect us to have 560MHz 512-bit DDR RAM less than four years from now?.. I don't.

1080i is 1920x1080 interlaced. I was also figuring for 64-bit color; the memory cost isn't the same as the move from 16 to 32 was, since there is no need for greater Z accuracy.

Yeah, I realise that. Still, at 1920x1080x64 with 4xFSAA, a 32-bit compressed Z-buffer and around 5x overdraw at 60fps, we're looking at around 28GB/s just for the framebuffer and Z-buffer. Now add CPU bandwidth, texture bandwidth, geometry bandwidth etc. If you're right about 50GB/s for high-end video RAM in 2006, then XBox 2 is going to need that high-end RAM for those settings (also, what if people want to move to 100fps for consoles?).

However, as I've said, I don't even agree with 50GB/s video RAM in 2006. Even if I did, the cost factor is still there, as we're talking about XBox 2 using high-end, cutting-edge 50GB/s RAM (that I don't even think will exist :)). With a PowerVR design it could do 1920x1080x64 4xFSAA at 60fps (outputting at 32-bit) with only 500MB/s of bandwidth for the framebuffer and no Z-buffer! Which means it could use extremely low-end RAM, or normal low-to-mid-end RAM, which means a large price difference as well as having loads more bandwidth available for textures, CPU etc. You could even output in 16-bit and take the bandwidth needed down to an incredible 237MB/s, and you'd still have done all rendering at 64-bit so it would still look quite good.
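To make the comparison concrete, here's the arithmetic as I read it (the sample counts, overdraw and byte sizes are this post's assumptions; Z compression isn't modelled, which is why the immediate-mode figure lands slightly above the 28GB/s quoted):

# Framebuffer traffic for the two approaches.
W, H, FPS = 1920, 1080, 60
pixels = W * H

# Immediate mode: 64-bit colour + 32-bit Z, 4 samples per pixel, ~5x overdraw
color_bytes, z_bytes, samples, overdraw = 8, 4, 4, 5
imr = pixels * (color_bytes + z_bytes) * samples * overdraw * FPS / 1e9
print(f"Immediate-mode framebuffer + Z traffic: ~{imr:.0f} GB/s")   # ~30 GB/s

# Deferred: only the final 32-bit image leaves the chip, once per frame
tbdr = pixels * 4 * FPS / 1e6
print(f"Deferred final write-out: ~{tbdr:.0f} MB/s")                # ~498 MB/s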

SpyHunter actually looks better on the PS2 than it does on the Cube or Box.

Opinions I've heard say that Xbox looks best, GameCube second and PS2 third.

VIA as a CPU provider: right now their best CPU can't keep up with the XBox's unit (FPU), let alone what was available eight or nine months ago.

They're still improving though; in 4 years they should have caught up quite a bit. Not all the way with AMD and Intel obviously, but they wouldn't need to.

For the K3 being better in fill-intensive situations on a TV for today's consoles: 18,432,000 pixels per second at 60FPS. How are you figuring an advantage? The Kyro3 will be doing nothing with its pixel pipes for longer than the XB?

I was talking about 640x480x32 with 4xFSAA, not just 640x480x32. Also, you're not factoring in overdraw. With the limited shared main memory bandwidth available to the XBox, the Kyro III would push a lot more fps and leave more texture bandwidth left over, as well as being a cheaper chip. You could even use higher FSAA levels like 8xFSAA and still only need the same 500MB/s for the framebuffer and no Z-buffer!
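And the same sort of arithmetic for the 640x480 TV case (the 3x overdraw is purely my assumed figure for illustration; note the deferred write-out at TV resolution actually comes out well under the 500MB/s quoted above for 1080i):

# The 640x480 TV case.
W, H, FPS = 640, 480, 60
print(W * H * FPS)                         # 18,432,000 output pixels/s

samples, overdraw = 4, 3                   # 4xFSAA, assumed overdraw
touched = W * H * FPS * samples * overdraw
print(f"Samples an immediate-mode chip shades/writes: ~{touched/1e6:.0f} M/s")  # ~221 M/s

out = W * H * 4 * FPS / 1e6                # deferred: one 32-bit write per pixel
print(f"Deferred framebuffer write-out: ~{out:.0f} MB/s")                        # ~74 MB/s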

VIA's North and South bridges aren't as good as nV's XB solution, and you also have to add the cost of the graphics chip along with a sound chip and the bridge chips, increasing the number of chips you need per unit.

I'm not sure what you mean here. Whether Nvidia does XBox 2 or IMG/VIA do, there's always going to need to be a North/South bridge as well as a CPU, graphics and sound chips... so what's your point? What I'm saying is that VIA (partnered with IMGTEC) have everything covered: they can do great motherboards and have a lot of experience in that area, they are moving forward in CPU design, they'd have graphics and sound from IMGTEC; they could produce just about everything needed for XBox 2 by themselves.

I am assuming they will be pushing for strong OpenGL support with their next console; neither ATI nor IMG are known for that (although that could open the door for Creative with their 3Dlabs acquisition)

Why would they be pushing for strong OpenGL support in their next console? As for IMGTEC not being known for OpenGL support: until recently they were known for poor OpenGL support, but Kyro changed that; Kyro II's OpenGL drivers are very impressive.

Now, if five years ago I had told you we would have 10 GB/sec by 2002, you would probably answer the same way you are now: ain't gonna happen... too expensive... too complex.

We don't have 10GB/s though. Theoretically we do, but in effect, compared to SDR RAM, DDR is not twice as fast. We have more like 8GB/s of bandwidth now, compared to the 1.56GB/s we had 5 years ago or the 2.75GB/s we had 4 years ago (and I think that's being generous to DDR's efficiency).

I suppose in 5 years 50GB/s could be possible as high-end, cutting-edge video RAM (more like 40GB/s though IMO), but certainly not in 4 years or less. We're most likely looking at 35GB/s at the start of 2006 IMO... maybe 40GB/s at most at the very top end.
 
If bandwidth were not an issue we would have perfect-quality edge anti-aliasing, developers wouldn't need to worry about structuring everything just right to hit the caches... etc. etc.

Just because the hardware and software are adapted to deal with the available bandwidth does not mean we would not be in a far better position with more of it right now.
 
Think back about 5 years ago, as ben was saying. At the time, the best single-chip solution, bandwidth-wise, was the Riva 128: 128-bit, 100MHz SDRAM.

Riva TNT was using 90MHz SDRAM so I don't know where you got your figures for the Riva 128... sorry had to nit pick :)

Necessity × Cost / Technologically Feasible Level = Real World Specs (RWS) rather than Peak Performance Specs (PPS)

:p
 
I didn't say bandwidth wasn't an issue. Nor did I say we wouldn't be in a better position if we had a part that used it more "efficiently". I've been begging for someone *cough, PowerVR, cough* to build a deferred renderer that utilized the latest memory available for what, 5 years now? ;)

I am only stating that many seem to paint a "bleak" picture for available raw bandwidth for the future. They have been painting this picture for the past 5 years, making the argument for architectures like deferred rendering sound more like a "necessity" than an "improvement" when advancing graphics performance. At this point, one has to start questioning if the bandwidth "wall" will actually happen, rather than assume it will happen in the near future.

Riva TNT was using 90MHz SDRAM so I don't know where you got your figures for the Riva 128... sorry had to nit pick

Heh...how about this for a blast from the past: ;)

http://www.anandtech.com/showdoc.html?i=65

The Riva 128 was using 128-bit, 100MHz S(D/G)RAM. And actually, although the original TNT core was clocked at 90MHz, the RAM was clocked at 110. Be careful when you nit-pick! ;)
 
I don't know "they", so I can't speak for them. The bandwidth "wall" never went anywhere; it's always been a bottleneck and will be for a while yet.

The success or failure of given specific chips proves nothing but their individual quality. You can try to extrapolate that to the quality of the underlying principles of the architectures... but in the end your extrapolation is no more rooted than my speculation IMO :) Can't prove the negative and all that.
 
but in the end your extrapolation is no more rooted than my speculation IMO :)

Absolutely! ;) Question though: if you're not "one of them", what exactly is your speculation? Simply put, my own extrapolation is that bandwidth will be "no more of a problem in 5 years than it is a problem now." Meaning "traditional" immediate-mode renderers will continue to thrive for the foreseeable future.

That's not saying that bandwidth isn't a problem now. Of course it's a bottleneck. But the point is, the situation doesn't seem to be getting progressively worse with each generation of product. (That was the thought 5 years ago.) "Effective Bandwidth" has kept up with increasing demand from GPUs.

It kinda reminds me of Moore's law. No one claims that transistor density isn't a "problem." That's the bottleneck for more powerful processors. And every year it seems there's a new "wall" placed on fabrication processes. "After XXX microns, we'll be hitting a wall and will need some new radical approach to increase transistor density". And it seems every year that point gets pushed further away as new evolutionary ways are discovered and implemented.
 
I stand corrected Mfa

... my bad :)

Since this is 3D Hardware and not just related to consoles: on the PC side, I think the AGP bus is fast becoming the next hurdle.
 