NV40 clock speed

I think it's widely acknowledged that NV35 is what NV30 should have been, but to say that NV40 is what NV30 should have been seems a little far-fetched. At a very early stage of design perhaps, but I can't imagine the product that NV40 is (as we know it now) could have had any chance of becoming NV30.

By the way Unit01, check your PMs on Rage3D.

What isn't widely known, let alone acknowledged, is that NV30 was meant to be NVIDIA's spring 2002 product, at least by the time the chalk-board design was completed. In that light, it was more than enough for that specific timeframe.
 
flf said:
In response to Hellbinder's usual negative remarks:

Consider for a moment the statements by Team Ferrari or Team Porsche that they "completely redesigned a new engine for their car this year." You'd look at it and say, "Hey! They have the same number of cylinders! They're still using the same fuel and oil! They still have cams and pistons and the transmission is functionally the same!"

When you redesign something, you don't necessarily discard every previous good idea you've had. Often you take the best ideas from before and improve upon them with new techniques and processes. Yes, the end result is going to look markedly similar to your previous engines when you're basing yourself upon a generational, evolutionary model.
The most sense I've read in a long time in these or any forums.
 
Degustator: Hmm. So you're saying nVidia might say "Oh well, that tech is bad anyway" to a lot of NV3x ideas and go back to the drawing board?
It makes sense for the shared FP/TMU system, which I'd guess is VERY expensive when doing mostly texturing ( transistor count figures sure show that... )
Hmm... Bah, we know so little about the NV4x microarchitecture...


Uttar
 
Uttar said:
Degustator: Hmm. So you're saying nVidia might say "Oh well, that tech is bad anyway" to a lot of NV3x ideas and go back to the drawing board?
I think they can. They are not too happy themselves with NV3x, you know... =)
 
DegustatoR said:
Uttar said:
Degustator: Hmm. So you're saying nVidia might say "Oh well, that tech is bad anyway" to a lot of NV3x ideas and go back to the drawing board?
I think they can. They are not too happy themselves with NV3x, you know... =)

Yeah. Bad performance, bad yields - easily their worst product ever.
Still, the way they implemented the technology is horrible. But some of the ideas aren't bad at all, IMO. Hmm, the "FPUs used as 2 TMUs" tech might not be amazing in the Pixel Shader, but it does sound like a good idea for the Vertex Shader, considering how little texturing will be used there in the NV4x generation...

BTW, Check your PM box :)


Uttar
 
...but it does sound like a good idea for the Vertex Shader, considering how little texturing will be used there in the NV4x generation...

Question: do you actually expect any other PS/VS3.0 product to be less flexible than that?
 
RussSchultz said:
Feh. That Mach64 was barely what the 32 was supposed to have done. (And they cheated on the prevailing benchmark of the time!)

(well, they did. :p )

What do you mean by that? When the Mach32 came out, I thought "WOW, it has motion video acceleration." The Mach32 had hardware for resizing the video-playback window to any arbitrary size. And that point-sampled (aka 'pixel replication') algorithm made my jaw drop. A number of VGA chips had useless 'scaling' modes, like halving the effective pixel scanout clock (to get a 2X-horizontal size), or having the CRTC double-scan the same scanline (2X-vertical size... which is how most VGA 200/240-line modes are displayed.)

Although I never owned one, am I the only one who misses the 'dual interface' designs of the old ATI Mach8? E/ISA on one side, MCA on the other side! (MCA was IBM PS/2's proprietary expansion bus.) And IBM 8514/A register-level compatibility...I was already very happy with my ATI VGA Wonder+, with its 70ns 512KB DRAM. (My friend's Trident VGA had 100ns DRAM!)
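For anyone unfamiliar with the point-sampled ('pixel replication') scaling described above: each destination pixel simply copies the nearest source pixel, with no filtering, which is what lets hardware resize a playback window to any arbitrary size cheaply. A minimal sketch in Python follows; this is a generic nearest-neighbor resize for illustration, not the Mach32's actual hardware logic.

```python
def pixel_replicate(src, src_w, src_h, dst_w, dst_h):
    """Nearest-neighbor ('pixel replication') scaling: each destination
    pixel copies the nearest source pixel -- no interpolation."""
    dst = []
    for y in range(dst_h):
        sy = y * src_h // dst_h          # nearest source row
        row = []
        for x in range(dst_w):
            sx = x * src_w // dst_w      # nearest source column
            row.append(src[sy][sx])
        dst.append(row)
    return dst

# Upscale a 2x2 "image" to an arbitrary 5x3 window size.
img = [[1, 2],
       [3, 4]]
print(pixel_replicate(img, 2, 2, 5, 3))
# → [[1, 1, 1, 2, 2], [1, 1, 1, 2, 2], [3, 3, 3, 4, 4]]
```

Because source coordinates are just a fixed ratio of destination coordinates, this maps naturally to a simple hardware counter/accumulator, unlike the crude 2X-only pixel-clock or double-scan tricks mentioned above.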
 
asicnewbie said:
RussSchultz said:
Feh. That Mach64 was barely what the 32 was supposed to have done. (And they cheated on the prevailing benchmark of the time!)

(well, they did. :p )

What do you mean by that? When the Mach32 came out, I thought "WOW, it has motion video acceleration." The Mach32 had hardware for resizing the video-playback window to any arbitrary size. And that point-sampled (aka 'pixel replication') algorithm made my jaw drop. A number of VGA chips had useless 'scaling' modes, like halving the effective pixel scanout clock (to get a 2X-horizontal size), or having the CRTC double-scan the same scanline (2X-vertical size... which is how most VGA 200/240-line modes are displayed.)

Although I never owned one, am I the only one who misses the 'dual interface' designs of the old ATI Mach8? E/ISA on one side, MCA on the other side! (MCA was IBM PS/2's proprietary expansion bus.) And IBM 8514/A register-level compatibility...I was already very happy with my ATI VGA Wonder+, with its 70ns 512KB DRAM. (My friend's Trident VGA had 100ns DRAM!)

Wow hehe... I must be one of the few rare folks who actually has an ATI Mach32-based EISA card still sitting on a shelf next to me... never had a Mach8 EISA card, but I do have an ISA one here. :)
 