Rumor: R350 out before GFFX and is 10% faster......

martrox

Old Fart
Veteran
I personally have some problems with this, but, as it's called a rumor on the front page of VoodooExtreme, I'll throw it out to all of you:

Showdown ATI
Make love, not war. Don’t expect to hear this motto from Nvidia anytime soon. Everyone seems to be in a hissy-fit over ATI’s R350 and the street date of March, which supposedly will be before the GeForceFX. Our source at the top says the R350 will be 10 percent faster and hit the streets before Nvidia even gets their packaging finished. So is the future of Nvidia doomed to follow in 3dfx’s footsteps? Perhaps, but they have a secret weapon: the NV35 coming out in November. Guess what’s been keeping the folks at ATI up all night.
 
Why the closing line?

Why even mention Nvidia's next product (the NV35)? Just in case this one totally loses out to ATI again?

Keep it to the current and soon-to-be-released products (Ti 4600, Radeon 9700, GeForce FX, and possibly the R350).

Speng.
 
That's a pretty pointless rumor, at least going by my guesses as to how it could be based even remotely on some actual information:

1) They are paraphrasing something that proposed the R350 is 10% faster than a GFFX. Well, with the GF FX performance so well defined, and with so much in the way of specifics of the comparison, that narrows things down, doesn't it? :-? ...

OR

2) They are paraphrasing something that proposed the R350 is 10% faster than the current R300 (albeit in a distorted fashion that makes it sound like the above). Well, if we had the rumor directly, that at least could be informative... at least about the R350... sort of. Which leaves the same big :?: about the performance of the GF FX that we have now.


More ado about nothing, AFAICS.
But maybe someone has some other theories or information about this?
 
First it's clear they are not saying that the R350 is 10% faster than the R300. If it was.. why even release it??

Second, I don't think that 10% faster than the GFFX 500/500 is going to cut it. I get the feeling that within a short time we will be seeing a GFFX or two clocked over 500 MHz and with 256 MB of RAM.

I think we might end up with the interesting case of there being a virtual tie.. (based on the above info)... however..

Just looking at the raw bandwidth the R350 *should* be bringing to the table... I have to think that the R350 is going to whip the Nv30's a$$ in high-resolution FSAA+AF tests. But that's just my opinion.
 
I just noticed the comment about Nvidia's secret weapon, the *NV35*, due in November.. and how it's keeping ATI up at night.. ROFLMAO!!! :LOL: :LOL:

That is just plain funny!! Does he really believe that? Or was it intended as sarcasm???

[Note: R400 in August/Sept]
 
Sounds like a yummy situation for the consumer, and the second-most-guaranteed-to-gather-a-crowd phrase comes to mind: "Price war!"

Oh yeah, "Cat Fight!" is number one. That's a euphemism for "two girls fighting, maybe they'll make out" btw. :)
 
I'm not convinced the NV30 is ever coming out anymore. I can't believe it's slipping to February. But it's been slipping and slipping for months. Supposedly it's taped out, then no it hasn't taped out. Now supposedly they're moving into production... For all we know it could just be more nonsense to cover up how late this thing is running.

I know if I'm running 10 minutes late to pick someone up, I'll sometimes say that I'll be there in 5 minutes, hoping to make up some time along the way. But, in the end, I'll get there in maybe 8-9 minutes, still more or less running 10 minutes behind...

In other words, we may not see NV30 until March or April at this rate, over a year after it supposedly taped out according to Anand. :rolleyes: I guess whatever credibility he had has been seriously diminished by this.
 
Hellbinder[CE] said:
First it's clear they are not saying that the R350 is 10% faster than the R300. If it was.. why even release it??

Why release the Radeon 8500 128MB cards? Why release the Radeon 9100?

And anisotropic filtering is primarily fillrate-limited, not memory bandwidth-limited.
 
Yes Chalnoth, I understand that.. I should have been clearer.. sorry.

With an increased core speed, combined with increased bandwidth.. I expect the R350 to beat the Nv30 in high-res FSAA+AF situations.

I was thinking that in my head, I just did not get it out.

You are right, 10% faster, or even a win-some-lose-some split, would be a nice product to counter the Nv30. As it is, I expect the Nv30 to be quite a bit faster than the R300 in situations where fillrate and its 500 MHz core come to bear..
 
Chalnoth said:
Hellbinder[CE] said:
First it's clear they are not saying that the R350 is 10% faster than the R300. If it was.. why even release it??

And anisotropic filtering is primarily fillrate-limited, not memory bandwidth-limited.

And? And antialiasing needs bandwidth - and? :idea:
 
They could get a board that was more than 10% faster than an R300 just by clocking R300s at 375 MHz (up from the 9700 Pro's stock 325 MHz, roughly a 15% bump)--I don't think they'd even need a new die for that, and the memory bandwidth seems to have enough headroom. So I don't think the 10% is relative to the R300.

The R350 is a new die (according to CMKRNL). You wonder what they might have done--extra TMU per pipe (supporting two texture addresses per cycle?), some pixel shader 2.0+ extensions--but to really beat the NV30 in performance (even with their memory bandwidth advantage) they will have to get the clock rate above 400 MHz. I think that's possible on .15 micron, since CPUs did it, but I'm sure if they pulled it off it required a lot of the "hand-tweaking" they claim went into the R300. NVidia certainly gave them enough time to work on it.
 
Chalnoth said:
And anisotropic filtering is primarily fillrate-limited, not memory bandwidth-limited.

This is entirely a matter of implementation.

Current texture sampling hardware tends to target four- or eight-sample-per-clock filtering as the baseline, and then sample over multiple clocks as necessary for more complex filters. It doesn't need to be that way. The hardware could instead be implemented to attempt even more texture samples in parallel, switching the performance bottleneck back to texture reading bandwidth. It's simply a matter of additional hardware expense vs. the perceived returns.

In many ways, building hardware to handle more samples per clock for higher degrees of anisotropic filtering is easier than building hardware to handle more samples for additional simultaneous textures. Sampling for higher degrees of anisotropy can be fairly cache-friendly, for instance.
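
As a rough illustration of that tradeoff, here is a toy cost model in code. It is purely my own sketch, not any vendor's actual pipeline: a texture unit with a fixed samples-per-clock budget loops over extra cycles whenever the filter asks for more samples than that, and the 8-sample trilinear baseline and the samples-per-degree assumption are both made up for the example.

Code:
#include <stdio.h>

/* Toy cost model (my own illustration, not any real chip): a texture unit
 * that can fetch and filter 'samples_per_clock' samples each cycle, and
 * loops over extra cycles when the filter needs more samples than that. */
static int clocks_for_lookup(int samples_per_clock, int aniso_degree)
{
    /* Assume trilinear = 8 samples, and each step up in anisotropy
     * multiplies the sample count: 2x aniso = 16, 4x = 32, and so on. */
    int samples_needed = 8 * aniso_degree;
    return (samples_needed + samples_per_clock - 1) / samples_per_clock;
}

int main(void)
{
    int degrees[] = { 1, 2, 4, 8, 16 };
    for (int i = 0; i < 5; i++) {
        printf("%2dx aniso: %2d clocks at 8 samples/clk, %2d clocks at 32 samples/clk\n",
               degrees[i],
               clocks_for_lookup(8, degrees[i]),
               clocks_for_lookup(32, degrees[i]));
    }
    return 0;
}

Widening the unit cuts the clock count per lookup, but every one of those samples still has to come out of the cache or memory, so the bottleneck moves from filtering throughput to texture-read bandwidth, which is exactly the tradeoff described above.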
 
The R350 won't need to be 10% faster... if it's going up against a card with OGMS AA. :)

Let's get real here, folks: these high-end cards aren't being bought to play at 1024x768... they are for running all details maxed with AA + AF in modern games.

What will decide market share is who can deliver the best image quality and features while still maintaining excellent/playable performance.

I only hope the continual delays with the NV30 are to better hone its xS AA modes, or possibly to provide some form of gamma-adjusted or jittered-sample AA, along with improvements in AF performance.
 
Dan G said:
Current texture sampling hardware tends to target four- or eight-sample-per-clock filtering as the baseline, and then sample over multiple clocks as necessary for more complex filters. It doesn't need to be that way. The hardware could instead be implemented to attempt even more texture samples in parallel, switching the performance bottleneck back to texture reading bandwidth. It's simply a matter of additional hardware expense vs. the perceived returns.

But with the so-called "adaptive" AF algorithm used by ATI, and apparently also by Nvidia from now on, most screen pixels don't undergo any anisotropic filtering even when AF is turned on. It seems extremely wasteful to have more than 8 sampling units per TMU when there are no settings that require more than 8 samples for all pixels. (Of course, Nvidia says the degree of "adaptivity" will be controllable via a slider.) So, it doesn't need to be that way, but it probably will be for the foreseeable future.
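
To put the "adaptive" part in concrete terms, here is a rough sketch of how I picture the per-pixel selection working. This is purely my own illustration, not ATI's or Nvidia's actual logic: the degree of anisotropy comes from how stretched the pixel's footprint is in texture space, capped by the slider setting, so surfaces seen head-on stay at plain trilinear.

Code:
#include <math.h>
#include <stdio.h>

/* Purely illustrative take on "adaptive" AF (not ATI's or Nvidia's actual
 * logic): the degree of anisotropy applied to a pixel is picked from how
 * stretched its footprint is in texture space, capped by the user setting. */
static int aniso_degree(double dudx, double dvdx, double dudy, double dvdy,
                        int max_degree)
{
    double len_x = sqrt(dudx * dudx + dvdx * dvdx);  /* footprint axis along x */
    double len_y = sqrt(dudy * dudy + dvdy * dvdy);  /* footprint axis along y */
    double major = len_x > len_y ? len_x : len_y;
    double minor = len_x > len_y ? len_y : len_x;

    if (minor <= 0.0)
        return max_degree;

    /* Round the major/minor ratio up to the next power of two, then clamp. */
    double ratio = major / minor;
    int degree = 1;
    while (degree < max_degree && (double)degree < ratio)
        degree *= 2;
    return degree;
}

int main(void)
{
    /* A wall seen head-on: isotropic footprint, stays at 1x (plain trilinear). */
    printf("head-on: %dx\n", aniso_degree(0.01, 0.0, 0.0, 0.01, 16));
    /* A floor receding toward the horizon: heavily stretched footprint. */
    printf("oblique: %dx\n", aniso_degree(0.01, 0.0, 0.0, 0.12, 16));
    return 0;
}

With a selection rule like this, the bulk of a typical frame lands at 1x, which is why sampling hardware wider than the 8-per-clock baseline would sit idle most of the time.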

Sharkfood said:
Let's get real here, folks: these high-end cards aren't being bought to play at 1024x768... they are for running all details maxed with AA + AF in modern games.

They are also being bought to be able to play a 2004/2005 game at 1024x768 with medium details and no AA/AF. Not everyone who buys a high-end card plans on buying a new one a year later.
 
They are also being bought to be able to play a 2004/2005 game at 1024x768 with medium details and no AA/AF. Not everyone who buys a high-end card plans on buying a new one a year later.

Hogwash... that high-end card is now a budget card in 2004.
 
Sharkfood said:
What will decide market share is who can deliver the best image quality and features while still maintaining excellent/playable performance.

I would love to believe this, but I don't think this will be the case. I think the competing cards will be very similar in performance and features.

Imo, I think it's the company with the best marketing/PR team that will have the market share in the end. In this regard, Nvidia is still miles ahead of ATI.

Edit: This reminds me of Intel vs. AMD. Both are great CPUs, but try selling an AMD system to a family that has been bombarded with "Intel Inside" ads! It's not easy, believe me; I do it on a daily basis.

Nvidia's marketing team aren't a bunch of dicks; they realize brand recognition goes a long way. ATI should take note!
 
The current trend in recent titles shows that, more than anything, developers are still leaning about 60% on the CPU (platform) and 40% on the graphics card.. so, in other words, upgrading the platform will deliver a more significant performance gain than a new video card.

The Radeon 9700 is mostly CPU limited in most current popular titles:

Game of the Year: Dungeon Siege (CPU limited)
BF1942: Most popular online game (CPU limited)
UT2003: (CPU limited)
 
Release date?

I see most of the discussion is about the performance of the R350 vs. the NV30, but one point of this rumor hasn't been mentioned much. Will cards based on the R350 be out before the GeForce FX? Is March a realistic date for the R350? Has this story somehow confused the R350 with the RV350?

It certainly would be a great publicity coup for ATI to get two next-gen cards out before Nvidia can get one out... If anyone has insights into the release date, I'd like to hear it.
 
You don't want games to be *mostly* CPU limited.. You want them to be *TOTALLY* CPU limited. Then you know the videocard is up to snuff. :)

For all of the games listed, until 6xAA and 16xAF can be applied with little to no change in performance, video hardware still has a ways to go.

The 9700 Pro does an excellent job at this, with an impressively low performance hit when enabling such additions, but 6xAA + 16xAF isn't painless for all titles at high resolution. There is still a wide variety of titles that incur a sizeable performance hit even on high-end systems.

This is the gap that needs to be filled. And I would also stress that the methods of AA + AF need improvement as well. Anyone who has tried to chase away shimmering/aliasing or artifacts on a 9700 Pro knows there is still room for improvement, which in turn will cost more in terms of video horsepower.

AND.. we still haven't seen anything with shaders or additional complexity. This won't truly hit the mainstream until NVIDIA strives to get a sub-$200 DX9.0 video card out, much like ATI has with their 9500 series. Until both NVIDIA and ATI can saturate the installed base (of which the budget market is the largest percentage) with high-powered, DX9.0-compliant video cards, the high end will never truly be exercised to its capabilities.
 
Doomtrooper said:
Game of the Year: Dungeon Siege (CPU limited)
BF1942: Most popular online game (CPU limited)
UT2003: (CPU limited)

Much as I don't like these games, I think the following is more accurate:

Game of the Year: The Sims (not limited by anything)
Most popular online game: Counter Strike (not limited by anything)

But we can ignore those since they aren't in fact real games. (although I'd also throw DS into that category) :) :eek: :LOL: :p

Also, DS doesn't count anyway, because it's barely a 3D game, so I don't see how it could ever end up being GPU limited with such simplistic graphics.

Fuz said:
Edit: This reminds me of Intel vs. AMD. Both are great CPUs, but try selling an AMD system to a family that has been bombarded with "Intel Inside" ads! It's not easy, believe me; I do it on a daily basis.

Nvidia's marketing team aren't a bunch of dicks; they realize brand recognition goes a long way. ATI should take note!

There is no comparison at all between these situations. Intel's brainwashing went to a far greater extent. Also, the reason Intel succeeded was mostly due to the misleading MHz numbers on the P4.

Besides, the people who blindly buy an Intel system are the same ones who just play The Sims anyway, or play CS and think they're really "leet" gamers. :rolleyes: In other words, the gaming scene tends to have more knowledgeable buyers, and a lot of them are teenagers who can't afford to waste money on an overpriced, underperforming product.
 