NV30 vs R300?

My guess is the big money will be in the sub-$200 DX9 market. The competitors will probably be:
- DX9 compliant
- smaller 0.13 micron chips with 70 or 80 million transistors
- half the pipelines at a 400MHz core (roughly 70% of R300 peak performance)
- 64~128MB of 128-bit DDR or DDR-II (bandwidth >= 9.6GB/s; rough math below)
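
A quick sanity check on that bandwidth number; the memory clock here is an assumption, just picked to show where 9.6GB/s comes from:

```cpp
// Rough check of the bandwidth figure above: a 128-bit bus at an assumed
// 300MHz DDR memory clock (2 transfers per clock) works out to 9.6GB/s.
#include <cstdio>

int main()
{
    const int    busBits           = 128;
    const double memClockMHz       = 300.0; // assumption, not a known part
    const double transfersPerClock = 2.0;   // DDR
    const double gbPerSec = (busBits / 8.0) * memClockMHz * transfersPerClock / 1000.0;
    printf("%.1f GB/s\n", gbPerSec);        // prints 9.6
    return 0;
}
```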

Sales will be fueled by games like Doom3.
 
Yes, that would be amazing...but the soonest we can expect something like that would be next Spring. Hopefully we'll see it then...but I kind of doubt it, unfortunately.

After all, how long has it been with no low-end DX8 cards to speak of? Kind of sad, really...and one thing that I do fault nVidia for...
 
Joe DeFuria said:
I agree with XMas. I see a lot of ATI fans claiming how ATI will be ready with a 0.13 micron part as soon as nVidia releases NV30. Not likely, IMO, unless the NV30 gets delayed all the way to next spring.

Yes, I second that. There will certainly be 0.13u parts of this architecture in the future, but IMO they will target the mainstream segment, not the enthusiast/marketing segment like the R300. The first enthusiast part based on 0.13u will most probably be the R400.

Is there a need for a 0.13u R300 anyway? I don't think so. ATI has the lead, it is NVIDIA which has to catch up, not the other way round.

Even if NV30 will be significantly faster (which is still very doubtful), NVIDIA still has to ramp up their DX8 mainstream and DX9 performance parts. In the meantime, if everything goes as intended with the RV300 and RL300, ATI might already have their DX9 mainstream parts ready by that time.
 
Chalnoth said:
After all, how long has it been with no low-end DX8 cards to speak of? Kind of sad, really...and one thing that I do fault nVidia for...
nVidia has the GeForce3 Ti200 and GF4 4400, which range in price from $100 to $160. I'd say that's pretty mainstream. ATi's 8500 LE cards have been under $150 for the past 3 months, and the 64MB model is now under $100. SiS's Xabre is another DX8 alternative.

The market is full of cheap DX8 cards. As a consumer, unless you're entirely uninformed, there's no reason NOT to get a DX8 capable card.
 
KnightBreed said:
nVidia has the GeForce3 Ti200 and GF4 4400, which range in price from $100 to $160. I'd say that's pretty mainstream. ATi's 8500 LE cards have been under $150 for the past 3 months, and the 64MB model is now under $100. SiS's Xabre is another DX8 alternative.

The market is full of cheap DX8 cards. As a consumer, unless you're entirely uninformed, there's no reason NOT to get a DX8 capable card.

$100 is not low-end. Around $50 is.

It's not about getting DX8 cards in the retail market, either. What's needed is to get them sold in nearly every PC, so that game developers will finally use them as a minimum spec.

It's also about people who do want to upgrade, but just want a video card that "can play games," and thus try to get the cheapest 3D card they can...
 
90 bytes per vertex is definitely way too much. You only need to store the per-triangle functionals. Those are only about 12 bytes per triangle per interpolated component, so 1 texture (two coordinates) + Z + W comes to four components, or about 48 bytes per triangle.
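
To make the functional bookkeeping concrete, here is a minimal sketch of computing one such plane equation per interpolated attribute; the struct names and layout are mine, not any particular hardware's:

```cpp
// One "functional" per interpolated attribute: v(x, y) = a*x + b*y + c,
// i.e. 3 floats = 12 bytes per attribute per triangle.
#include <cstdio>

struct Vtx { float x, y, v; };        // screen position + one attribute value

struct Functional { float a, b, c; }; // 12 bytes

Functional planeFor(const Vtx& p0, const Vtx& p1, const Vtx& p2)
{
    // Solve v_i = a*x_i + b*y_i + c by Cramer's rule (assumes a non-degenerate triangle).
    float d = (p1.x - p0.x) * (p2.y - p0.y) - (p2.x - p0.x) * (p1.y - p0.y);
    float a = ((p1.v - p0.v) * (p2.y - p0.y) - (p2.v - p0.v) * (p1.y - p0.y)) / d;
    float b = ((p2.v - p0.v) * (p1.x - p0.x) - (p1.v - p0.v) * (p2.x - p0.x)) / d;
    float c = p0.v - a * p0.x - b * p0.y;
    return { a, b, c };
}

int main()
{
    // One texture pair (s, t) plus Z and W = 4 functionals = 4 * 12 = 48 bytes/triangle.
    printf("bytes per triangle: %zu\n", 4 * sizeof(Functional));
    return 0;
}
```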
 
Chalnoth said:
archie4oz said:
You could look into meshify algorithms (I've done some on the PS2, getting as low as .06 to 0.7 verts/tri per mesh patch). With those sorts of ratios you're looking at a 200,000 poly scene consuming only 600KB...

Did you mean .6 to .7? Anyway, yes, it is possible to have a 2:1 tri/vertex ratio. Still, with your 200k poly scene consuming only 600KB, are you only considering vertex position data, or all the other vertex attributes as well (texture alignment, normals, lighting values, etc.)?


Deering's generalized triangle mesh method combined with his quantization/Huffman encoding technique can do up to 10:1 compression including all the extras (normals, coordinates, etc.). It is very quick/simple to decompress. This is implemented in Sun's Java3D (software compress/decompress) and in hardware on Sun's Elite3D card.

Topological surgery and other edge spanning methods are even more amazing.
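
For anyone curious, here is a minimal sketch of just the quantize-and-delta step behind that kind of scheme; the actual Deering/Java3D format adds the generalized triangle mesh, a vertex reuse buffer, and Huffman coding of the delta lengths, and the bit width here is only illustrative:

```cpp
// Sketch of the quantization + delta step used in Deering-style geometry
// compression. Names and bit widths are mine, not the Java3D/Elite3D format.
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// Quantize a coordinate in [-1, 1] onto a signed 16-bit grid.
static int16_t quantize(float v, int bits = 16)
{
    float scale = float((1 << (bits - 1)) - 1);
    return static_cast<int16_t>(v * scale);
}

// Emit per-vertex deltas; consecutive mesh vertices are close together, so most
// deltas need far fewer bits than absolute 3x32-bit float positions.
std::vector<int32_t> deltaEncode(const std::vector<Vec3>& verts)
{
    std::vector<int32_t> out;
    int16_t px = 0, py = 0, pz = 0;
    for (const Vec3& v : verts) {
        int16_t qx = quantize(v.x), qy = quantize(v.y), qz = quantize(v.z);
        out.push_back(qx - px); out.push_back(qy - py); out.push_back(qz - pz);
        px = qx; py = qy; pz = qz;
    }
    return out; // a real encoder would now Huffman-code these small deltas
}
```

Roughly speaking, the 10:1 figure comes from replacing three 32-bit float positions (plus full-precision normals and colors) with a few tens of Huffman-coded delta bits per vertex, while the mesh structure lets most vertices be reused rather than re-sent.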
 
DemoCoder said:
Deering's generalized triangle mesh method combined with his quantization/Huffman encoding technique can do up to 10:1 compression including all the extras (normals, coordinates, etc.). It is very quick/simple to decompress. This is implemented in Sun's Java3D (software compress/decompress) and in hardware on Sun's Elite3D card.

Topological surgery and other edge spanning methods are even more amazing.

Regardless of how far you compress the geometry, the argument still remains. Geometry power is going to be increasing much faster than fillrate over the next few years. All that geometry compression can do is delay how quickly TBR's become less efficient than IMR's.

Additionally, how much more fillrate do we need than we have today? It doesn't seem to me like we need fillrate to increase very quickly, whereas geometry rates are far below what they should be.
 
Deering's generalized triangle mesh method combined with his quantization/Huffman encoding technique can do up to 10:1 compression including all the extras (normals, coordinates, etc.). It is very quick/simple to decompress. This is implemented in Sun's Java3D (software compress/decompress) and in hardware on Sun's Elite3D card.

Indeed, Deering's method was the basis for my work! I was occasionally able to get as high as 12:1...

Regardless of how far you compress the geometry, the argument still remains. Geometry power is going to be increasing much faster than fillrate over the next few years. All that geometry compression can do is delay how quickly TBR's become less efficient than IMR's.

"Geometry power" is pretty vague... Are you referring to vertex count or vertex computation? Mesh density really hasn't really climbed all that much for consumer real-time hardware (consoles withstanding), certainly not at the pace you paint. I mean the set-up engines alone on last year's hardware hasn't even been saturated (in terms of final scene output), and all the vertex calculation hardware you see being packed into today's GPUs is simply going allow (for the most part) developers not to increase geometry complexity in to the stratosphere, but to simply allow them the freedom for more complex calculations without your vertex count dropping like a two-bit whore...

Fillrates (more specifically, memory bandwidth) can also be affected: spend too much time on vertex calculation and your pixel engines sit idle with nice big pipeline bubbles, waiting on the setup engine, which is in turn waiting on the vertex engines to give it some vertices to work with. It gets worse with complex fragment programs, since those are bigger bandwidth hogs than vertex programs. Eventually you're going to have to operate on more pixels at the same time in order to maintain your fillrate as program lengths grow. Rather ironic when you think back to the days when GPUs went from 2 to 4 pixel pipes and everybody lambasted the GS for its "outrageous" and "unnecessary" 16-pipe pixel engine, yet look at us now, halfway there...
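
To put some rough numbers on that (the clock and pipe counts are purely hypothetical):

```cpp
// Back-of-the-envelope: as fragment programs get longer (more cycles per pixel),
// you need more parallel pixel pipes just to hold effective fill rate steady.
#include <cstdio>

int main()
{
    const double clockHz        = 300e6;     // hypothetical core clock
    const int    cyclesPerPix[] = {1, 4, 8}; // growing program length
    const int    pipeCounts[]   = {4, 8, 16};

    for (int cycles : cyclesPerPix)
        for (int pipes : pipeCounts)
            printf("%d cycles/pixel, %2d pipes -> %5.0f Mpixels/s\n",
                   cycles, pipes, pipes * clockHz / cycles / 1e6);
    return 0;
}
```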

Then you're leaving out physical design possibilities. If Sony can mass-produce and market a rasterizer with 4MB of embedded memory (at 250nm initially, no less!) and can feasibly produce 32MB at 180nm, then something as svelte as PowerVR on 130nm or 100nm with 4MB (or even 8, 16, or 32) of embedded memory starts looking feasible. With proper compression techniques you can fit quite a bit in 4-8MB, let alone 16 or 32MB.
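
Some back-of-the-envelope numbers on what fits; the ~3 bytes per triangle is just the figure implied by the 200k-poly/600KB example earlier in the thread, and the resolution is an example:

```cpp
// How far a few MB of embedded memory goes: a full 32-bit color + 32-bit Z
// framebuffer at 1024x768, and scene geometry at an assumed ~3 bytes/triangle
// after compression (200k polys in 600KB, as quoted above).
#include <cstdio>

int main()
{
    const double pixels = 1024.0 * 768.0;
    printf("1024x768 color+Z: %.1f MB\n", pixels * 8.0 / (1024.0 * 1024.0)); // ~6.0 MB

    const double bytesPerTri = 3.0;                 // assumption, post-compression
    const double edramMB[]   = {4.0, 8.0, 16.0, 32.0};
    for (double mb : edramMB)
        printf("%4.0f MB -> ~%4.1f million triangles of scene data\n",
               mb, mb * 1024.0 * 1024.0 / bytesPerTri / 1e6);
    return 0;
}
```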

Besides, deferred renderers still have a place in the low-cost consumer sector. I'm still downright amazed Intel hasn't tried to license PowerVR for its northbridges (where the real market share comes from)...
 