NV30 transistor count still at 120 million or has it changed?

The last time we heard a transistor count for the NV30, it was 120 million. The chip supposedly underwent changes in response to the R300 design, but it was never stated whether that 120 million figure dates from before or after those changes, or from before or after they removed the programmable primitive processor. Does anyone know anything about this?
 
It takes a huge effort to significantly redesign a processor this complex. The basic design has almost certainly been frozen since the spring.
 
I think last I heard the transistor count was "over 100 million." (Sorry, I can't remember the source).

To me, this sounds like it could be down from ~120 million.

Remember, though, when it comes to transistor counts, size doesn't matter.
 
antlers4 said:
I think last I heard the transistor count was "over 100 million." (Sorry, I can't remember the source).

To me, this sounds like it could be down from ~120 million.

Remember, though, when it comes to transistor counts, size doesn't matter.

It was from the Hong Kong presentation a few weeks back...
 
Hmm, I may be smelling 3 vertex pipelines, like some of you speculated in the other NV30 thread. Hopefully there will be n-patch/displacement mapping support.

Maybe there are 4 vertex pipes, but all the logic for encoding the floating point texture formats in the pixel shaders (cubemaps, 3D textures, etc.), as on the R300, is gone. With fewer registers (16 vs. 32, at least in the vertex shader pipeline) and only rectangle texture support, maybe it will contain 4 VS, 8 PS, and 2 TMUs which are more complex, justifying the 10 million fewer transistors relative to the R300. Maybe it won't have the 256-bit memory controller, who knows.

What do you guys think?
 
few thoughts:

First, the basic NV30 design will NOT have changed much in response to the R300 - GPU designs are locked down many months before release.
At best, Nvidia could have done some tweaks and adjustments to NV30, and perhaps they are re-working the final core and memory speeds, but the transistor count/design would not have changed at all. I believe the design was finalized early this year.

The prospect that NV30 may have fewer than the all-but-announced 120M transistors is interesting, though...

Also, if NV30 has only a 128-bit bus, then I would expect NV35 (the fall '03 product) to move to 256 bits.
 
Luminescent said:
Hmm, I may be smelling 3 vertex pipelines, like some of you speculated in the other NV30 thread. Hopefully there will be n-patch/displacement mapping support.

Isn't displacement mapping part of DirectX 9? So NV30 will surely have to support displacement mapping to be a DirectX 9 part.
 
crystalcube said:
Luminescent said:
Hmm, I may be smelling 3 vertex pipelines, like some of you speculated in the other NV30 thread. Hopefully there will be n-patch/displacement mapping support.

Isn't displacement mapping part of DirectX 9? So NV30 will surely have to support displacement mapping to be a DirectX 9 part.

Yes, it is IIRC.
 
NVIDIA calls displacement mapping "Render To Vertex Array", which is in their CineFX architecture, which most probably means it will be in NV30.
 
But would the NV30 remain competitive if it only sported 3 vertex shaders? Wouldn't the R300 just be able to ramp up the clock speed, add DDR-II support, and pounce on the NV30 in current software (4 VS, 256-bit memory bus)? Maybe it is not as simple as that; I am just looking at the worst-case scenario for NV30. It would be hard on Nvidia if they were caught with their pants down this late into the game, but hopefully, for the good of the advancement of tech, the NV30 surpasses the R300 in both hardware features and performance; if it doesn't, my hat will come off once more for the 9700 team.
 
Richard Huddy has frequently said games are almost never bottlenecked by the T&L hardware, but are usually bottlenecked by bandwidth or the CPU (spin-locks or GPU waits).

I bet future GPUs will be gated by pixel shader performance, not vertex shader performance.
 
Luminescent said:
...hopefully, for the good of the advancement of tech, the NV30 surpasses the R300 in both hardware features and performance; if it doesn't, my hat will come off once more for the 9700 team.

So by your logic, because Nvidia has yet to do what ATI has done with the R300, ATI isn't worthy of recognition for having already advanced the technology?
Perhaps we should look at the current state of the industry (technology-wise), and who the real innovator has been for the last year.

Last I recall, Nvidia's latest chip to date only supports an incomplete DX8.1 feature set (no PS 1.4) and is the same speed-bumped, tweaked core we've had since the GeForce3.
 
gkar1 said:
So by your logic, because Nvidia has yet to do what ATI has done with the R300, ATI isn't worthy of recognition for having already advanced the technology?
..

He did say "my hat will come off once more", didn't he?
 
Reverend said:
NVIDIA calls displacement mapping "Render To Vertex Array", which is in their CineFX architecture, which most probably means it will be in NV30.

Render to vertex array isn't really displacement mapping in the same sense. You render to a floating point texture, then you do a glReadPixels() into a VAR buffer from which you can then render. This is more of a hack than a hardware feature, and it doesn't support any on-the-fly LOD selection.
 
I don't think render to vertex array is how the NV30 will officially support DM. It's merely another feature NVidia added, which is the ability to treat a render target as a vertex buffer.

In this sense, it's not really DM, but geometry creation/destruction from within a pixel shader. A poor man's programmable tessellation.
 
Gkar1, by my logic ATI deserves major recognition; however, it would be illogical if technology developed at a later date, or over a longer period, were not an improvement. If it is not a significant improvement, at least with respect to Nvidia's goals (which I'm pretty sure are to surpass ATI in the performance and features arena), then it is lagging behind ATI's pace and the advancement of the VPU. That is what I meant.
 
DemoCoder said:
I don't think render to vertex array is how the NV30 will officially support DM. It's merely another feature NVidia added, which is the ability to treat a render target as a vertex buffer.

In this sense, it's not really DM, but geometry creation/destruction from within a pixel shader. A poor man's programmable tessellation.

You can't treat the render target as a vertex buffer; you need to call glReadPixels to copy the pixels into the vertex buffer.

The most important question, though, is whether NV30 supports displacement mapping in the Matrox sense (or however I should word it) at all. It doesn't look like it, judging by this topic at opengl.org, especially mcraighead's comment (he's a driver writer at nVidia).

Oh yeah. On the topic of displacement mapping, there is at least one way to accomplish this. Probably more ways exist that I haven't thought of.
It will sound slow at first; bear with me.

Render into a float buffer surface, using whatever sort of cool fragment program computation you want to displace your vertices. Your "color" output in RGB is just your vertices' XYZ position.

Use ReadPixels. Then, point your vertex array pointers at your "pixel" data you just read back, and blast it back into the HW as vertices.

Slow because of ReadPixels? Not really, at least if you use the (new) NV_pixel_data_range extension. Use wglAllocateMemoryNV to get some video memory. ReadPixels into that video memory using what is known as a "read PDR", and then use VAR to draw the vertices. No bus traffic required.

Your indices can just be constants that represent your surface topology.

- Matt
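
For anyone unfamiliar with the extensions Matt mentions, here is roughly what that sequence looks like. This is only a minimal sketch assuming the NV_pixel_data_range and NV_vertex_array_range extensions are exposed; renderDisplacementPass() is a made-up placeholder for the fragment program pass, and extension entry-point loading (wglGetProcAddress) and error checking are left out.

Code:
/* Sketch of the ReadPixels-into-video-memory path described above.
   Assumes the NV extension entry points have already been fetched
   with wglGetProcAddress; no error checking. */
#include <GL/gl.h>
#include <GL/glext.h>
#include <GL/wglext.h>

#define GRID_W     64
#define GRID_H     64
#define NUM_VERTS  (GRID_W * GRID_H)
#define BUF_SIZE   (NUM_VERTS * 3 * sizeof(GLfloat))

static GLfloat *vidmem;   /* video memory shared by the read PDR and VAR */

void renderDisplacementPass(void);   /* hypothetical: runs the fragment
                                        program into the float buffer */

void setupBuffers(void)
{
    /* wglAllocateMemoryNV(size, readFreq, writeFreq, priority):
       a priority near 1.0 requests video memory. */
    vidmem = (GLfloat *)wglAllocateMemoryNV(BUF_SIZE, 0.0f, 0.0f, 1.0f);

    /* Let ReadPixels write straight into that memory (a "read PDR")... */
    glPixelDataRangeNV(GL_READ_PIXEL_DATA_RANGE_NV, BUF_SIZE, vidmem);
    glEnableClientState(GL_READ_PIXEL_DATA_RANGE_NV);

    /* ...and let the vertex puller source from the same memory (VAR). */
    glVertexArrayRangeNV(BUF_SIZE, vidmem);
    glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);
}

void drawDisplacedGrid(const GLushort *indices, GLsizei indexCount)
{
    /* 1. Run the displacement fragment program into a GRID_W x GRID_H
          float buffer; the RGB of each "pixel" is a displaced XYZ. */
    renderDisplacementPass();

    /* 2. Read the float buffer back; with the read PDR bound, the copy
          stays in video memory and never crosses the bus. */
    glReadPixels(0, 0, GRID_W, GRID_H, GL_RGB, GL_FLOAT, vidmem);

    /* 3. Draw the result as vertices; the index list is just a constant
          description of the grid topology. */
    glVertexPointer(3, GL_FLOAT, 0, vidmem);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);
}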
 
So it seems NV30 might achieve displacement mapping even though it may not have any dedicated hardware support for it.
 