NV30 - last chance to speculate - Reactorcritical specs

g__day

Regular
NV30 performance specs from Reactorcritical

In Quake III Arena at 1280x1024 with 4x FSAA enabled, NV30 is going to be 2.5 times faster than the GeForce4 Ti4600.
In the next Doom, the board based on NV30 will be able to show 3.5 times or even more of the performance that Nvidia's current flagship has to offer there.

NV30 will score three times more than the GeForce4 Ti4600 in 3D Mark 2001.

Effective HQ Pixel Fillrate (2x anisotropic filtering enabled) of the newcomer will be about 2.7 times more than that of the fastest NV25.

As for pixel-shading speed, it will be 4 times that of the NV25.

Nvidia claims that its upcoming GPU is capable of processing 200 million triangles per second.

We have no idea if these estimates are accurate as there are no details about them and how they were achieved.

What you really, really want are the actual specifications of the GPU and the boards based on it. In fact, we already published some interesting things a number of weeks ago in our news story called "Another blast from the future: NV30". We were correct in almost all our predictions except the memory speed. Nvidia wants it to be about 1 GHz, delivering an amazing 48 GB/s of bandwidth when accompanied by the 3rd generation of their LightSpeed Memory Architecture. We are not sure that Samsung will be able to deliver 1 GHz DDR-II memory by September.

* * * * *

Guys and Gals

I don't hold to these statements with any strong conviction, I am just keeping them here as a footnote to what will be announced in 3 days time.

It would be great to see how many conflicting speculations can be put to bed in just 72 hours.

So if you have any last minute speculations - now's your last chance to place a bet :)
 
If that's true.... dren. I will be floored. On one hand, I don't really believe it. On the other... well, that doesn't sound like your typical marketing hype, to me. Decisions, decisions.
 
I've outlined some of the highlights from the Inquirer's blurb...in case you're one of those that "doesn't do the Inquirer."

1. 8x Antialiasing
2. 128 tap Anisotropic Filtering
3. 3.5x more perf. in Doom III
4. 400-500 MHz clock
5. 8 Pipelines
6. 2 TMUs per pipeline
7. 48 GB/sec bandwidth. 128-bit interface @ 1 GHz
8. Still on tap to deliver some parts by Christmas

So, I guess the picture is now much more clear. I really do hope that some actual performance figures will be disclosed next week, but I'm not holding my breath.

As for the obvious 9700 vs. NV30 debate that will surely escalate...Right off the bat, the big difference appears to be in the area of IQ enhancements. Of course, it will be equally interesting to see what sort of performance drop will be seen with 8x AA...and what sort of method is being employed. Maybe some of that Voodoo tech?

Beyond the stock/standard performance edge that NV30 will likely hold over the 9700, I guess the price tag is still not known...though I would speculate it being in the $400 neighborhood.
 
Interesting speculations, to say the least..

NV30 will score three times more than the GeForce4 Ti4600 in 3D Mark 2001.

The question here is: the current version of 3DMark2001, or a special version to be released in tandem? hehe. I just don't see 35,000 3DMarks being possible without a 4 GHz CPU.
 
The problem being, of course, that Fuad doesn't really know what he's talking about:

We can also confirm that with a 1.0 GHz data rate plus Lightspeed Memory Architecture 3, a card will be able to reach an amazing 48 GB/s, double what the Radeon 9700 PRO can do with its DDR 256-bit memory. Remember Nvidia will use 128-bit memory this time and will save its breath for 256-bit DDR II for future designs.

It would be interesting to see where he comes up with the figure that the NV30 will have double the bandwidth of the R300! :rolleyes:

Seems like just more regurgitation of marketing material to me.
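For what it's worth, the raw arithmetic shows just how odd the "double" claim is. A quick sketch (the 310 MHz DDR memory clock is the published 9700 PRO spec; the NV30 numbers are only the rumoured ones):

```python
# Raw memory bandwidth: bus width in bytes times effective data rate.
# GB/s here means 10^9 bytes/s, as the vendors quote it.
def raw_bandwidth_gb(bus_bits, data_rate_mhz):
    return bus_bits / 8 * data_rate_mhz * 1e6 / 1e9

# NV30 as rumoured: 128-bit bus at a 1.0 GHz effective DDR-II data rate.
nv30_raw = raw_bandwidth_gb(128, 1000)
print(nv30_raw)   # -> 16.0

# Radeon 9700 PRO: 256-bit bus, 310 MHz DDR (620 MT/s effective).
r300_raw = raw_bandwidth_gb(256, 620)
print(r300_raw)   # -> 19.84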
 
Sharkfood said:
Interesting speculations, to say the least..

NV30 will score three times more than the GeForce4 Ti4600 in 3D Mark 2001.

The question here is: the current version of 3DMark2001, or a special version to be released in tandem? hehe. I just don't see 35,000 3DMarks being possible without a 4 GHz CPU.

well, at 1600x1200 with 4xAA and 8xAF the GF4 is quite slow ;) The testing conditions for these comparisons will always be the most favourable for the new product, right :)
 
48GB "effective" is actually MORE than [edit: double] the raw b/w of R300 in its current incarnation...

[edit: aargh, I hate it when one forgotten word changes the entire meaning of a sentence!]

If Nvidia is going to start counting never accessed bytes (due to pixel occlusion hardware and an arbitrarily selected overdraw figure) into its bandwidth, I see no reason why ATI should not start using the same dubious practice and soundly stomping that 48GB figure into the ground with a 60+GB/s figure of their own...

It's really shitty marketing, this "effective" bandwidth. It's not even effective bandwidth at all, it's more like EQUIVALENT bandwidth, if you had the pixel filling capacity to do the overdraw also at the same speed as the "non-overdrawn" scene.

Like they say, 3 kinds of lies: lies, damn lies and marketing...


*G*
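Grall's point about "equivalent" bandwidth can be made concrete with two lines of arithmetic (the overdraw factor of 3 is an assumption, chosen only because it reproduces the quoted figure):

```python
# "Equivalent" bandwidth counts bytes that occlusion hardware never
# fetches as if they had been transferred: the raw figure times an
# arbitrarily chosen overdraw factor.
def equivalent_bandwidth(raw_gb, overdraw_factor):
    return raw_gb * overdraw_factor

# A 128-bit bus at 1 GHz gives 16 GB/s raw; an assumed overdraw
# factor of 3 reproduces the quoted 48 GB/s exactly.
print(equivalent_bandwidth(16.0, 3))   # -> 48.0

# By the same accounting, the 9700 PRO's 19.8 GB/s raw becomes
# roughly 59.4 -- the "60+ GB/s figure of their own" ATI could quote.
print(equivalent_bandwidth(19.8, 3))
```

The multiplier is the whole trick: pick a bigger assumed overdraw and the "equivalent" number grows without any change to the hardware.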
 
Sharkfood said:
I just don't see 35,000 3DMarks being possible without a 4 GHz CPU.

Probably not even then. With the fastest GPUs, 3DMark2001 is more of a host system benchmark than a GPU benchmark.
OTOH, this probably reflects the reality of just about all games out there.

Entropy
 
I never use The Inquirer as a source of information - whoever asked that should be shot - they push used toilet paper IMHO
 
Typedef Enum said:
Of course, it will be equally interesting to see what sort of performance drop will be seen with 8x AA...and what sort of method is being employed. Maybe some of that Voodoo tech?
Naaah. I bet on some A-buffer-like scheme for the AA...they store a couple of fragments per pixel with an 8x8 coverage mask. Samples are sparse, 8 samples per fragment.
With a fixed number of fragments stored per pixel and an IMR architecture, the final pixel colour could depend on primitive submission order..so I hope they went for a deferred architecture too..
Ok..back to reality now...gonna do some work ;)

ciao,
Marco
 
Entropy said:
Sharkfood said:
I just don't see 35,000 3DMarks being possible without a 4 GHz CPU.

Probably not even then. With the fastest GPUs, 3DMark2001 is more of a host system benchmark than a GPU benchmark.
OTOH, this probably reflects the reality of just about all games out there.

Entropy

When you use high resolution with AA and anisotropic filtering, it is GPU limited, so what's wrong with that?
Unless you'd rather play games at 1024x768 with no AA or aniso, I think the games are not CPU limited.
 
Sharkfood said:
Interesting speculations, to say the least..

NV30 will score three times more than the GeForce4 Ti4600 in 3D Mark 2001.

The question here is: the current version of 3DMark2001, or a special version to be released in tandem? hehe. I just don't see 35,000 3DMarks being possible without a 4 GHz CPU.

Comparative scores are all about what the baseline is. And the "3 times" part is probably with AA and AF, when the GF4 Ti4600 is scoring in the 6000 range. That makes it 18,000, which is really close to the 20,000-22,000 guess given in the article. Hmm. Or is it that 20,000 is the best without AF and AA, and when those features are turned on it drops to the 16,000-18,000 range?

Guess we will find out more on Monday.
 
Sharkfood said:
The question here is: the current version of 3DMark2001, or a special version to be released in tandem? hehe. I just don't see 35,000 3DMarks being possible without a 4 GHz CPU.

4 GHz?

Rather 6... ;)
 
g__day said:
So if you have any last minute speculations - now's your last chance to place a bet :)

Something nobody has said yet: Ken Perlin's noise function implemented in hardware.

The noise function is something that has been heavily utilised in a lot of the shader demos (both low-level and Cg). Perlin published a paper earlier in the year which introduced improvements to his generic noise function specifically to aid hardware implementation.

A direct quote from Ken back in May:

"My paper looks forward to enabling a fast hardware-accelerated implementation at some point in the near future"

Also, Cg has a noise() function, although it is 'not yet implemented'.

I remember emailing Gary Tarolli (where is he now?) about adding a hardware implementation of the noise function for procedural geometry and textures, and he said that he didn't see the use for it.

Rob
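For anyone who hasn't met it, here is a minimal software sketch of the kind of 2D gradient noise being discussed. The integer hash below stands in for Perlin's 256-entry permutation table, and the fade curve is the 6t^5 - 15t^4 + 10t^3 polynomial from his improved-noise paper; this is an illustration of the technique, not his exact construction:

```python
import math

# A small deterministic integer hash standing in for Perlin's
# permutation table (an assumption made for brevity).
def hash2(ix, iy):
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return h ^ (h >> 16)

# Pick one of 8 fixed gradient directions per lattice corner and
# dot it with the offset from that corner.
def gradient(ix, iy, dx, dy):
    dirs = [(1,1), (-1,1), (1,-1), (-1,-1), (1,0), (-1,0), (0,1), (0,-1)]
    gx, gy = dirs[hash2(ix, iy) % 8]
    return gx * dx + gy * dy

# Perlin's improved fade curve: 6t^5 - 15t^4 + 10t^3.
def fade(t):
    return t * t * t * (t * (t * 6 - 15) + 10)

def lerp(a, b, t):
    return a + t * (b - a)

# 2D gradient noise: interpolate the four corner gradients of the
# cell containing (x, y). Zero at every integer lattice point.
def noise2(x, y):
    x0, y0 = math.floor(x), math.floor(y)
    dx, dy = x - x0, y - y0
    u, v = fade(dx), fade(dy)
    n00 = gradient(x0,     y0,     dx,     dy)
    n10 = gradient(x0 + 1, y0,     dx - 1, dy)
    n01 = gradient(x0,     y0 + 1, dx,     dy - 1)
    n11 = gradient(x0 + 1, y0 + 1, dx - 1, dy - 1)
    return lerp(lerp(n00, n10, u), lerp(n01, n11, u), v)
```

The property that matters for the hardware argument is that the function is purely deterministic: the same (x, y) always yields the same value, whichever chip or software path evaluates it.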
 
pocketmoon_ said:
A direct quote from Ken back in may:

"My paper looks forward to enabling a fast hardware-accelarated implementation at some point in the near future"
Ken Perlin patented it.
 
Grall said:
It's really shitty marketing, this "effective" bandwidth. It's not even effective bandwidth at all, it's more like EQUIVALENT bandwidth, if you had the pixel filling capacity to do the overdraw also at the same speed as the "non-overdrawn" scene.

Like they say, 3 kinds of lies: lies, damn lies and marketing...


*G*

To be fair, I don't think it's right to slam nVidia for quoting a 48 GB figure when they've done nothing of the sort (yet). This was just a typically poorly written, factually suspect article from the Inquirer.

Now if nVidia themselves were to claim that its 48 GB 'effective' bandwidth was twice that of the R9700, then they'd deserve a roasting, but let's wait until Monday at least.
 
nAo said:
Ken Perlin patented it.

Well, he wouldn't give it away now, would he :)

Anyone would be able to license his noise function patent for a fee. And why put a noise function into Cg for future use if the future implementation wasn't going to be in hardware?

The other great benefit of using Perlin's implementation is that you get a function that returns the same result for a given set of input parameters, whether it's implemented in hardware X, Y or Z, or in software.
 