NV30 vs R300?

I know ATI is rumored to have .13 engineering samples even now, so I still say the refresh part will come sometime in Q1 of 2003 for a Spring release...of course I'm just guessing.
I also hear of this dual-chip combo or Maxx setup...R400??
 
ATi don't need to be "ahead" all the time to regain market share. They just need to make sure that when they are one step in front of nVidia (as will be the case for the next few months) they really make the most of it.

Yup. Best case scenario for consumers AFAIC, is ATI and nVidia keep "one-upping" each other with every new release. :)

I also cannot see NV30 being a "significantly" better part. All IMO of course, but I don't see any real opportunity for nVidia to leverage any "new rendering features" as a real product seller. As NVIDIA themselves have already stated about products coming this fall: "It's either DX9, or it's not." Well, R-300 is DX9, so what else could NV30 offer that is of real significance to the consumer as far as rendering goes?

Basically, all it can really offer, IMO, is speed....in one of two ways, or both.

1) Raw performance. I don't see that much opportunity here. It looks like R-300 is making pretty good use of its ~20 GB/sec of bandwidth, and I'm not sure how much more performance per GB can be squeezed out of it....this is assuming NV30 is based on a 20 GB/sec bandwidth type design to begin with.

2) "Special" AA features. This is perhaps one area that I can see nVidia having a chance to really differentiate their product. R-300's AA looks to be pretty damn good, but we know there are other solutions (like matrox FAA) that are shoing good potential. If nVidia can figure out a way to do something equivalent to 16X AA, with the performance impact that R-300 can do 4X AA, that would be a good selling point.

Of course, there are too many unknowns to do much real speculating. Maybe nVidia IS doing a "Gigapixel"-like design, at which point the age-old question of "can deferred renderers really compete in the PC market" might finally be answered. Maybe nVidia IS planning on multi-chip boards for the high-end gamer. Maybe they invented their own brand new memory type that delivers 300 GB/sec of bandwidth. Who knows. ;)
 
Above said:
I will choose nVidia over ATi for the foreseeable future. Many here are 3D enthusiasts enchanted by numbers, but to me it is all about playing games. Things must work. I like the step in the right direction the GF4 took, toward image quality improvements that are transparent to the game itself, with FSAA at lower cost. Mindful of the 3dfx acquisition, I am looking forward to the NV30.
I think it's very ironic to look at the features being touted nowadays, and then look at how basic the current games are supposed to be. Perhaps in 3 years, when games actually use these new cards, I will buy one.

I think the point of the 'gamers' here is that a game coming out next month (UT2003), which at high detail levels cripples cards of less than GF3/8500 calibre, is perfectly playable at high detail/4xAA/16x AF at 10x7 or 12x10 on the R300 (which is 2.5 to 3x the performance of the NV25).

The 'gamers' here don't just want high QIIIA performance with no AA/AF like most gaming websites look for. They want high res, high quality AA and AF, and performance to spare for the future.

So far the R300 seems to have everything the developer needs/wants for 3D games, plus the performance to keep the hardcore happy now. There isn't any need to wait for DX9 games in 2 years' time to see the benefits of the R300 over current hardware; they show up in current games, now.

The only things left are price and waiting for real reviews of the final product, IQ and drivers.

I cannot see how your argument actually hangs in favour of any other IHV's product over the R300 at this moment in time, with what is known to date.
 
If ATI is planning an R-300 Maxx setup, I would guess we wouldn't see it until they can build one based on a 0.13 micron R-300 variant. I would think a dual 0.15 R-300 board would just be way too power hungry.

Wow...dual 256 bit busses on one board.... :eek:

I would also hope that ATI ditches their previous AFR rendering scheme (as MfA mentioned in another thread) and has some other technology. I was never a fan of AFR. I liked 3dfx's SLI solution, as well as PowerVR's "alternate tile" type solution for deferred renderers. (And we all take a moment of silence as we fondly remember the petitions for a multi-chip PowerVR PC part...)
 
Joe DeFuria said:
If ATI is planning an R-300 Maxx setup, I would guess we wouldn't see it until they can build one based on a 0.13 micron R-300 variant. I would think a dual 0.15 R-300 board would just be way too power hungry.

Wow...dual 256 bit busses on one board.... :eek:

I would also hope that ATI ditches their previous AFR rendering scheme (as MfA mentioned in another thread) and has some other technology. I was never a fan of AFR. I liked 3dfx's SLI solution, as well as PowerVR's "alternate tile" type solution for deferred renderers. (And we all take a moment of silence as we fondly remember the petitions for a multi-chip PowerVR PC part...)

I think a dual .13 micron board would still require external power.

Has anyone heard about ATI's NA1? IIRC I saw it last year in Maximum PC magazine, on ATI's roadmap (R200, R300 and NA1).

What about a 256-bit DDR-II deferred rendering DX9 .13 micron chip :eek: :eek: :eek:
 
DemoCoder said:
Beating them to launch right now is somewhat of a minor victory, because the real battle will happen this Christmas/New Year season. The .15um fall launch of the R300 is, if anything, a PR move.
I am no marketing expert, but a lot of the freshmen coming into my university have new computers. Freshman orientation in my state starts in late August. Would any gamer students be able to pick up an R300 by then?

One additional thing I just thought of: wasn't a big selling point of the original GF3 that it was the first DX8 development platform? So if there are any developers who want to get a jump on DX9, they would start with the R300.

Anyway I know even less about marketing than 3d so someone please educate me.
 
I would guess probably not. Probably early September.

ATI has "promised" to ship the 9700 by August 18th. However, IMO, that means "ship from the factory"....then take a few days to a week or so for it to start appearing in the seller's inventory, and allow for another week or so for "unforseen" cicumstances which always seem to surround the release of any product from any vendor...

They will almost certainly be able to order a 9700 by then, but I wouldn't count on actually being able to pick one up.

EDIT: Oh...and freshman college students picking up a 9700?! Back when I went to school, colleges didn't pay kids to go to school, it was the other way around. ;)
 
2) "Special" AA features. This is perhaps one area that I can see nVidia having a chance to really differentiate their product. R-300's AA looks to be pretty damn good, but we know there are other solutions (like matrox FAA) that are shoing good potential. If nVidia can figure out a way to do something equivalent to 16X AA, with the performance impact that R-300 can do 4X AA, that would be a good selling point.

Why?

If I understand this correctly, the R300 uses some form of RGMS, so its 4xAA is more like 4x4 OGMS, or some form of 4x4 sparse supersampling (at the polygon edges only). So IMHO the AA quality of the R300 should be (at least nearly) as good as the 16x FAA from Matrox with its 4x4 OG sampling. But the 4xAA of the R300 will not only "work" at polygon edges like the FAA from Matrox, but also on intersecting polygons (if it works like the GF4's MSAA), and so the overall quality should be higher.
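
To illustrate the intersecting-polygons point (just my own toy simplification, nothing to do with any vendor's actual pipeline): with multisampling, every sub-sample keeps its own Z, so where two triangles cut through each other the depth test can go either way per sample and the downfilter blends the intersection line. An edge-only scheme that only adds samples where it detected a polygon silhouette never sees that interior intersection.

Code:
struct Color { float r, g, b; };

struct Fragment {          // one pixel's worth of one rasterized triangle
    bool  covered[4];      // which of the 4 sub-samples it covers
    float z[4];            // interpolated depth at each sub-sample
    Color color;           // shaded only once per pixel (multisampling)
};

struct Pixel {
    float z[4]     = { 1.f, 1.f, 1.f, 1.f };
    Color color[4] = {};
};

void resolveMSAA(Pixel& p, const Fragment& f) {
    for (int i = 0; i < 4; ++i) {
        if (f.covered[i] && f.z[i] < p.z[i]) {   // per-sample depth test
            p.z[i]     = f.z[i];
            p.color[i] = f.color;                // same color, per-sample winner
        }
    }
}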
 
Well, first of all, I don't think Matrox uses an "ordered grid" for their 16x FAA implementation either. (Anyone know for sure?)

Second, while I would agree that a 4X "rotated grid" or "pseudo-random" sample pattern would certainly be of higher quality than a 4X ordered grid, it would likely NOT be as good as a 16X ordered grid. It would be somewhere between 4X and 16X ordered grid.

Finally, I'm NOT saying that if nVidia just implements Matrox's solution, then nVidia has the edge in AA. I'm saying that there IS room to improve upon ATI's AA, and if nVidia can do it, that could be a good selling point. I just gave Matrox as an example of a "different" AA solution that has at least some advantages over ATI's solution. :)
 
Joe DeFuria said:
Well, first of all, I don't think Matrox uses an "ordered grid" for their 16x FAA implementation either. (Anyone know for sure?)

Second, while I would agree that a 4X "rotated grid" or "pseudo-random" sample pattern would certainly be of higher quality than a 4X ordered grid, it would likely NOT be as good as a 16X ordered grid. It would be somewhere between 4X and 16X ordered grid.

Finally, I'm NOT saying that if nVidia just implements Matrox's solution, then nVidia has the edge in AA. I'm saying that there IS room to improve upon ATI's AA, and if nVidia can do it, that could be a good selling point. I just gave Matrox as an example of a "different" AA solution that has at least some advantages over ATI's solution. :)

According to everything I've read, the FAA is OGMS.

4xRGMS is nearly as good as 4x4 OGSS. Because of that, the AA of the V5 looks far better than that of the GF3/4. It is not as good as 4x4 OGSS overall, but it is as good as 4x4 sparse supersampling when you look at the edges only.

4x4 OG :

X_X_X_X
X_X_X_X
X_X_X_X
X_X_X_X

4x RG :

X______
____X__
__X____
______X

(just my quick sketch of the scheme; the exact positions will not be correct)

You can see that the 4xRGMS pattern has one sample point in every X column and Y row; with a correct distribution of sample points it can be nearly as good as 4x4 OGSS even on angled lines. At least that's what the Z3 paper says.
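
To make that "one sample per X and Y" point concrete, here's a quick toy program (the sample offsets are purely illustrative, the real R300 positions aren't public): it just counts how many distinct X and Y positions each pattern hits. The sparse/rotated grid gets the same 4 distinct positions per axis with 4 samples that the ordered grid needs 16 samples for.

Code:
#include <cstdio>
#include <set>

int main() {
    // 4x4 ordered grid: 16 samples, but only 4 distinct X and 4 distinct Y values.
    std::set<float> ogX, ogY;
    for (int y = 0; y < 4; ++y)
        for (int x = 0; x < 4; ++x) {
            ogX.insert((x + 0.5f) / 4.f);
            ogY.insert((y + 0.5f) / 4.f);
        }
    printf("4x4 OG: 16 samples, %zu distinct X, %zu distinct Y\n",
           ogX.size(), ogY.size());

    // 4-sample rotated/sparse grid (one possible layout): 4 samples,
    // each in its own column and row, so still 4 distinct X and 4 distinct Y.
    const float rg[4][2] = { {0.125f, 0.625f}, {0.375f, 0.125f},
                             {0.625f, 0.875f}, {0.875f, 0.375f} };
    std::set<float> rgX, rgY;
    for (auto& s : rg) { rgX.insert(s[0]); rgY.insert(s[1]); }
    printf("4x RG:   4 samples, %zu distinct X, %zu distinct Y\n",
           rgX.size(), rgY.size());
    return 0;
}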
 
Joe DeFuria said:
Of course, there are too many unknowns to do much real speculating. Maybe nVidia IS doing a "Gigapixel"-like design, at which point the age-old question of "can deferred renderers really compete in the PC market" might finally be answered. Maybe nVidia IS planning on multi-chip boards for the high-end gamer. Maybe they invented their own brand new memory type that delivers 300 GB/sec of bandwidth. Who knows. ;)

I consider this very likely (like that would mean much to anyone :LOL: )...I've been thinking that working out issues with this is why nVidia has been delayed, ever since there was so much uncertainty and so little concrete information from the nVidia hype machine as the R300 approached. I don't see them having wasted their cash inflow on reinventing the IMR with more brute force and ignoring the resources they acquired from 3dfx. It seems to me that the NV30 is most likely:

A deferred renderer with caches out the ying-yang (focused on the high end, and perhaps necessary for high-speed triangle setup)...in other words a "brute force" (compared to Kyro, for example) deferred renderer.

Trying to capitalize on the benefits you and others have listed as possible for a deferred renderer (AA, alternate tile for multichip, and some other enhancements and features based on the architecture shift).

It seems to me that doing the above to DX9 specs would explain the delays in the nVidia engineering team's execution, and why they have been stuck on brute force enhancements to the base GF3 technology for so long.

I think this is the "revolution" nVidia intends (the fact that ImgTech has been advocating deferred rendering wouldn't matter from their perspective, ImgTech has failed to deliver a high end focused part so far).

I think this would explain how they could compete performance-wise with a 128-bit bus, and why there has been no noise from them (that I know of) about a 256-bit bus. My understanding is that a deferred renderer could open up the bandwidth of even a 128-bit DDR interface to some pretty flagrant pixel pushing accomplishments.
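
To illustrate why (purely my own toy simplification; the structures and names below are made up, not anything from an actual NV30 or Kyro design): a tiler writes small per-triangle bin records once, does all the Z testing and overdraw in on-chip tile memory, and touches external DDR only once per final pixel.

Code:
#include <vector>

struct Box { int x0, y0, x1, y1; };             // screen-space bounding box

struct Triangle { Box bounds; /* vertices, shading state, ... */ };

struct Tile {
    Box area;                                   // e.g. a 32x32 pixel region
    std::vector<const Triangle*> bin;           // triangles touching this tile
};

static bool overlaps(const Box& a, const Box& b) {
    return a.x0 < b.x1 && b.x0 < a.x1 && a.y0 < b.y1 && b.y0 < a.y1;
}

void renderDeferred(std::vector<Tile>& tiles, const std::vector<Triangle>& tris) {
    // 1) Binning pass: small per-triangle records, written once.
    for (const Triangle& t : tris)
        for (Tile& tile : tiles)
            if (overlaps(t.bounds, tile.area))
                tile.bin.push_back(&t);

    // 2) Per-tile pass: Z testing, overdraw and (with MSAA) the extra samples
    //    all live in on-chip tile buffers, so they cost no external bandwidth.
    //    Each framebuffer pixel is written out to DDR exactly once.
    for (Tile& tile : tiles) {
        (void)tile;                    // placeholder for the per-tile work:
        // resolveVisibilityOnChip(tile);
        // shadeVisiblePixels(tile);
        // writeTileToFramebuffer(tile);
    }
}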

That said, I could be Completely Wrong (Duh!), but this is the specific theory I'm sticking with, based on my admittedly limited knowledge of 3D technology...I simply can't think of anything else that better fits nVidia's past engineering performance, their behavior and lack of innovation in the NV2x family, or that would even begin to fulfill their claims and the hints of their cheerleaders (Anand's comments come to mind, though to be fair he did mention ATi's upcoming part in his GF4 preview, even if positive statements for nVidia are always emphasized more than for ATi in both previews).

So, tell me why I'm wrong, and maybe some better theories will arise. ;)
 
There's one reason why I don't think it's a deferred renderer.

Quote from "What comes after 4?" from nVidia:
Yes so...
• Consider laying down the Z buffer first
• Draw your objects into front-to-back order
• But this isn’t a per-poly sort...
• This allows you to minimise the overall cost by not spending time on unseen pixels
They wouldn't recommend something like this if their upcoming architecture didn't benefit from it.
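
For anyone wondering what that advice looks like in practice, here's a rough sketch of a Z-only first pass in plain OpenGL (drawScene() is just a placeholder for your own per-object traversal, sorted roughly front-to-back, not per polygon):

Code:
#include <GL/gl.h>

void drawScene();   // your scene traversal, coarse front-to-back object order

void renderFrame() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Pass 1: lay down the Z buffer only -- color writes off, depth writes on.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    drawScene();

    // Pass 2: full texturing/shading -- depth is already resolved, so any
    // pixel that isn't visible fails the Z test before the expensive work.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_LEQUAL);    // GL_EQUAL also works if the geometry is identical
    drawScene();
}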
 
Xmas said:
There's one reason why I don't think it's a deferred renderer.

Quote from "What comes after 4?" from nVidia:
Yes so...
• Consider laying down the Z buffer first
• Draw your objects into front-to-back order
• But this isn’t a per-poly sort...
• This allows you to minimise the overall cost by not spending time on unseen pixels
They wouldn't recommend something like this if their upcoming architecture didn't benefit from it.

Thanks! And here's something more from that link: ;)

What else will we need?
• We already have vertex and pixel processors…
• We also need to add a primitive processor:
• With access to the connectivity information of the mesh
• So that you can destroy primitives inside the GPU
• e.g. Local LOD management where it counts
• So that you can create primitives inside the GPU
• Implementation of arbitrary HOS in your app without me having to tell my architecture team exactly what you want as much as three years before you want it
 
A little bird told me NV30 will have everything Richard Huddy mentioned in that paper. He also told me that the most exciting things the NV30 will employ are not mentioned in that paper...and that we have to wait and see :)

ciao,
Marco
 
ben6 said:
A little bird? Bah, you're a tease :LOL: ;) (heh, I had to do that to someone since everyone does it to me :) )
It shouldn't be difficult to figure out who the little bird is....this time the simplest answer is the correct one :)

ciao,
Marco
 
Hrm, how do I want to put this? Nvidia is fine. The real battle (if anyone thinks more than 5-10% of the market will pay $300 for a video card, they're crazy) will be in the DX9 mainstream parts. Dave Orton (a real standup guy; I had cocktails with him over at the Fairmont Hotel in San Francisco) said there's likely to be a mainstream card (under $200) with DX9 features from ATI next year, likely in Spring, though he would only commit to the first half of next year. Here are a couple of interesting quotes:


From Anandtech:

And looking towards next year, the RV250 won’t be able to hold its own against what NVIDIA has planned for the GeForce4 MX; but after seeing what ATI has been able to pull off with the R300, we can’t wait to see what else they’ve got up their sleeves

What is nVidia planning with their "GeForce4 MX Plus" (for lack of a better term)?

Second, historically nVidia has released a "mainstream/value" part the year after the launch of the corresponding high-end part.

They did this with the TNT (Vanta), TNT2 (M64), GeForce DDR (GeForce2 MX), and GeForce3 (GeForce4 MX, though I dislike that name very much), so what can we expect from NVIDIA's future value parts?

It's going to be interesting to see what NVIDIA will do to cut down and cost-reduce their DX9 high end to reach the mainstream, if that's the direction they're going. But it's ATI's goal, and presumably NVIDIA's as well, to drive DX9 into the mainstream next year.

We already know that SiS will be there with the Xabre2 and VIA with the elusive Columbia, and presumably Matrox and 3DLabs will update their cards.

It's going to be interesting, and I'm glad I'm here to watch the ride.
 