For some strange reason I have a post to make - re: NV30

BoardBonobo said:
But if you implemented two 128-bit buses and a memory controller that could interleave the flow, wouldn't that give you a theoretical 32GB/sec bandwidth?

Oh! Oh! And if you implement four 128bit buses you get 64GB/s! Or 8 buses for 128GB/s! Or 16 for...

What exactly is your point BoardBonobo?
 
I was just wondering, if nVidia sticks with a 128-bit bus design as they have alluded to, whether it would be possible to build the card around a dual-bus design. Is this harder to implement than 256-bit at the controller level, and would it actually deliver 32GB/sec throughput?
 
BoardBonobo said:
I was just wondering, if nVidia sticks with a 128-bit bus design as they have alluded to, whether it would be possible to build the card around a dual-bus design. Is this harder to implement than 256-bit at the controller level, and would it actually deliver 32GB/sec throughput?

The GeForce3/4 cards use a Quad 32-bit bus design. The R300 uses a Quad 64-bit bus design. The NV30 won't drop down to a dual bus design.
 
>>But if you implemented two 128-bit buses and a memory controller that could interleave the flow, wouldn't that give you a theoretical 32GB/sec bandwidth?<<

I think what you're talking about is sorta what DDR does. DDR is, more accurately, twice the bus width multiplexed onto half the pins... so a 300 MHz, 128-bit DDR bus would be essentially the same as a 300 MHz, 256-bit SDR bus. The memory presents the lower half of the data on the first half of the clock cycle, and the upper half on the second half. That's why DDR is somewhat less efficient than SDR, since you run into granularity problems with a wider bus. That's also why NVIDIA split their 128-bit bus into four 32-bit buses... that way you can read four 64-bit chunks every cycle rather than one 256-bit chunk. Since every bit you read within a chunk has to be linearly adjacent, and not all the data you need in a given clock cycle *is* adjacent, this can help a lot.
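To make the granularity point concrete, here's a rough back-of-envelope sketch (purely illustrative; the 96-bit request size is a made-up example, and this is not how any real memory crossbar works) of how much fetched data goes to waste when a small read gets rounded up to the chunk size:

```python
# Rough sketch only (not NVIDIA's or ATI's actual controller): how much fetched
# data is wasted when a small read is rounded up to the fetch granularity.

def wasted_bits(request_bits, granularity_bits):
    """Bits fetched but not needed once the request is rounded up to the granularity."""
    fetched = -(-request_bits // granularity_bits) * granularity_bits  # ceiling division
    return fetched - request_bits

request = 96  # hypothetical small read, e.g. a Z value plus a couple of color values

print(wasted_bits(request, 256))  # one 256-bit chunk per clock     -> 160 bits wasted
print(wasted_bits(request, 64))   # four independent 64-bit chunks  ->  32 bits wasted
```

The narrower the independent chunks, the less you over-fetch when the data you actually need isn't contiguous.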

DDR2 doubles the data rate once more and, I'm assuming, doubles the effective bus width once more as well... at the aforementioned 300 MHz clock rate, SDR would have a 300 megabits per second per pin transfer rate, DDR would have 600 Mbps/pin, and DDR2, 1200 Mbps/pin.

So, at equal clock speed, DDR2 on a 128-bit bus is just as fast as DDR on a 256-bit bus, except that you can lose some efficiency. If NVIDIA sticks to their 4x 32-bit bus controller and ATI has their 4x 64-bit bus, and everything else is equal, they could have about the same raw effective throughput. However, DDR2 will be difficult to clock as high as current DDR modules because the timing constraints are much, much tighter (multiplexing four signals across a pin per clock cycle instead of just two).
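Putting rough numbers on that (a quick sketch using the 300 MHz example clock above, and the assumption from this post that DDR2 gives four transfers per clock; these are peak figures only, ignoring efficiency):

```python
# Peak-bandwidth arithmetic for the numbers above (300 MHz example clock;
# transfers per clock: 1 for SDR, 2 for DDR, 4 for DDR2 as assumed in the post).

def peak_gb_per_s(clock_mhz, bus_bits, transfers_per_clock):
    return clock_mhz * 1e6 * bus_bits * transfers_per_clock / 8 / 1e9

print(peak_gb_per_s(300, 128, 1))  # 128-bit SDR   ->  4.8 GB/s
print(peak_gb_per_s(300, 128, 2))  # 128-bit DDR   ->  9.6 GB/s
print(peak_gb_per_s(300, 256, 2))  # 256-bit DDR   -> 19.2 GB/s
print(peak_gb_per_s(300, 128, 4))  # 128-bit DDR2  -> 19.2 GB/s, same peak as the line above
```

That's the "just as fast at equal clock speed" equivalence, before any efficiency losses.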

If ATI's memory controller is capable of using DDR2 on a 256-bit bus (it's possible it's the same sort of thing as the GeForce2 MX's, where the halved memory controller was capable of a 128-bit SDR or 64-bit DDR bus), and they use that capability in the near future, the NV30 can't come close in raw memory throughput.

Actual effective throughput, however, is dependent on a lot more than just the speed of the memory... if NVIDIA's memory architecture has a bit less raw bandwidth, it could make up for it in efficiency... or the other way around, if ATI is more efficient. NVIDIA is probably extending data compression through the entire framebuffer on the NV30, much like the NV20-25 compress the Z-buffer. Since there's a lot of redundant data stored with MSAA, this will likely be *very* effective in increasing antialiasing performance.
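Just to illustrate why MSAA data compresses so well (a toy sketch of the general idea only, not NV30's actual compression scheme): with multisampling, any pixel that isn't crossed by a polygon edge holds identical color samples, so a lossless scheme can store one value plus a flag:

```python
# Toy sketch of why MSAA color data compresses well (NOT NV30's real scheme):
# pixels not crossed by a polygon edge hold identical samples, so one value suffices.

def stored_slots(samples):
    """Naive lossless rule: store 1 slot if all samples match, otherwise all of them."""
    return 1 if len(set(samples)) == 1 else len(samples)

pixels_4x = [
    (0xFF0000,) * 4,                            # interior pixel, 4 identical samples
    (0xFF0000,) * 4,                            # interior pixel
    (0xFF0000, 0xFF0000, 0x0000FF, 0x0000FF),   # edge pixel, mixed samples
]

raw = sum(len(p) for p in pixels_4x)              # 12 sample slots uncompressed
packed = sum(stored_slots(p) for p in pixels_4x)  # 6 slots with the naive rule
print(raw, packed)
```

In a typical scene the vast majority of pixels are interior pixels, so the savings grow with the sample count.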
 
Once again this Cinema stuff is 'neat' but has no effect on games. Frankly, I don't understand why Carmack is getting so excited about it... it basically has no effect on him. He's a real-time engine creator, not a cinematic producer.

God, I hope he's not thinking of trying to move over into the movie industry. He should stick to what he does best: making engines (and he's damn good at it). He has yet to make a good game; I hate to think how bad his movies would be.

Anyway, it's all just speculation at this point. Just because Nvidia is doing something different than ATi doesn't necessarily mean their way is the right way. Their cards have totally different architectures...

I've still yet to see anything on paper or otherwise to suggest the NV30 is going to be significantly faster than the R300, and I still believe they'll be essentially the same speed. Time will tell, I guess. Either way, competition is good for bringing down the prices. :)
 
Once again this Cinema stuff is 'neat' but has no effect on games. Frankly, I don't understand why Carmack is getting so excited about it... it basically has no effect on him. He's a real-time engine creator, not a cinematic producer.

Of course it has an effect in games. The effects won't be seen immediately (Did DOOM3 come out immediately after the GeForce1? No...but, as JC said, it's leveraged on that technology...just imagine what we'll see in a few years based on NV30 tech...).

In the short-term, the only effects we'll see are the increased fillrate, and presumably improved FSAA/aniso.

As time goes on, we'll slowly see games start to use the new effects made possible by these cards in a limited fashion. Eventually we'll see full-blown games that require an NV30 (based on JC's statements...and it will probably come from id software first...).
 
>>Once again this Cinema stuff is 'neat' but has no effect on games.<<

No, the POINT is that it'll have an effect on games. The goal is the convergence of high-end cinematic rendering and games. Of course, you'll always be able to render more detailed, more advanced scenes offline, but the gap can be, and is being, closed significantly with what can be done in real time.
 
11. NV30 and DX9 schedules are now aligned. This may change if Microsoft delays DX9, but not the other way around.

Don't you all know ATI is working very closely with MS on DirectX9 too, as are other graphics companies?

:eek: It is all very vague atm about the DirectX9 stuff... to be continued...

8) But from what I know, the NV30 will lose on most points compared to the ATI... you will see.

Nvidia has a problem atm which could lead to a huge change in the industry... to be continued. Not the time to give any more info on it... soz :(
 
I don't think anyone answered this question yet. The NV30 will support 12-bit fixed point, 16-bit float, and 32-bit float. This info was presented on a PowerPoint slide.
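For what it's worth, here's a quick illustration of the precision gap between those two float formats (assuming the 16-bit float is the usual s10e5 "half" layout and the 32-bit float is standard IEEE single; numpy's types follow those layouts, and there's no direct analogue for the 12-bit fixed format):

```python
# Quick look at the precision gap between the two float formats, assuming
# 16-bit = s10e5 half and 32-bit = IEEE single (numpy matches both layouts).
import numpy as np

x = 1.0 + 1.0 / 2048                  # 1.0 plus a tiny offset
print(np.float32(x) - np.float32(1))  # 0.00048828125 -> the offset survives in 32-bit
print(np.float16(x) - np.float16(1))  # 0.0           -> rounded away by the 10-bit mantissa
```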

Someone commented about Lord of the Rings running in real time on the 9700. I was disappointed with this "demo." The scene was very short (a couple of seconds) and not all that impressive. There were a lot of orcs walking and that's it. So basically it showed off a lot of vertex processing power.

I must commend ATi's demo writers though. The demos were very well written. The action can be frozen at any time and stepped through. Also, the split screen modes are very helpful.
 