For some strange reason, I have a post to make re: NV30.

My main problem, as with the Rendermonkey thread he started and then deleted, is that he seems so anxious to post any 'leaked info' that he doesn't even check with the source whether it's OK to post...

How many threads start with something like 'GeForce 3 Details'...

Then get edited down to a line...

'Sorry, wasn't allowed to post that.'
 
IDK, but considering this:
I've been corresponding with NVIDIA the last few days on the Siggraph 2002 presentations. Here's what they had to say:
it seems like he did check with the source.

That's why I asked what particular part of his post you had a problem with, or whether you were just questioning his credibility in general.
 
For the record, I didn't delete the Rendermonkey thread. Second, there's a distinction between posting something I'm merely hearing about and posting something told to me by a person working for the company in question. Third, there are occasions where I have been participating in a discussion and suddenly stop posting on the item, or delete or edit a post, to reflect that I can't comment on it.

I do try to use a little discretion. If I sound like a tease, I apologize; it really isn't my intention. In any event, believe what you want about me. I've written a fun, witty story on my trip to ATI's launch day, and that will hopefully be my next post.
 
That's why I asked what particular part of his post you had a problem with, or whether you were just questioning his credibility in general.

I'll answer that one :)

The part that involved a certain company...begins with the letter 'n'...and the fact that they're reportedly going to unleash this monster, kick-ass product that has...apparently...been given the 'A-OK'...'Thumbs Up' by Carmack...

I'm sure if Ben were to make a nice post about the 9700/ATI (which he has), you wouldn't get any complaints from him.
 
After reading this information.. I am quite a bit more relaxed about what's going to go down over the next few months..

I appreciate the discussion.
 
Ben, I think you are well aware of the teasing posts you make. I'm not attacking you here...I just don't take your information as fact.
I have no problem with your posts, but they are not 100% accurate, so I save judgement for the official PR.
 
Why not? DDR2 on a 128-bit bus should put nVidia within roughly 20% of the bandwidth of the R300. I see no reason why this difference cannot be made up with more advanced memory bandwidth-saving technology.

While I respect the reasons for your assumptions in this area, I feel the need to point out that the R300 also straps a very advanced, efficient memory controller to that 256-bit bus...as well as HyperZ III...

Ben6.. I am available if you need a *hug* after all this abuse you're taking.. ;)
 
Hellbinder[CE] said:
While I respect the reasons for your assumptions in this area, I feel the need to point out that the R300 also straps a very advanced, efficient memory controller to that 256-bit bus...as well as HyperZ III...

I agree that ATI has done an excellent job with the memory bandwidth efficiency and usage of the R300. The excellent high-resolution FSAA benchmarks prove this. I just feel that it's far from impossible to do better.

In other words, the claim, "DDR2 will not compete with a 256-bit bus," is not certain.

Some reasons why this is not certain:

1. nVidia will certainly use at least four memory controllers, since the GF4 already does. The doubling of the pipelines, and the higher controller bandwidth needed for DDR2, may warrant the use of eight. This should bring the efficiency of the memory interface itself to or above ATI's level (most likely above, given that nVidia's first crossbar controller has been on the market for a year and a half...and by "the efficiency of the memory interface" I'm not talking about usable bandwidth, but usable bandwidth relative to theoretical).

2. It is very possible that nVidia may actually not use as much memory bandwidth in high-resolution FSAA scenarios. This isn't overly likely, but it's still possible. After all, nVidia has been using multisampling for a year and a half now...how hard would it be to believe that they've further tweaked the performance of their implementation?

3. The only theoretical leg-up that ATI has is their mature Hyper-Z technology. But neither of the previous two generations was enough to outperform nVidia's products...I see little reason why the third generation will be enough.

4. It's almost assured that the NV30 will be able to reach higher clockspeeds, which always improves memory bandwidth usage efficiency.

Anyway, the #1 thing that I'm hoping for with the NV30 right now is that the 3dfx engineers got their way with respect to FSAA quality, that is, that they're using a more advanced sampling pattern than a basic grid. That, and that the NV30 includes greater than 16-degree max aniso. Performance is secondary to that, in my mind.
 
Eh, who cares. I doubt there will be much of a performance difference, not enough for nVidia users to switch to ATI or ATI users to switch to nVidia, so let's all chill and have some snow cones with the penguins.... The penguins know all about the NV30 and the R300, and they're saying the R400 will be the fastest.
 
If you assume that Nvidia is going to use a 128-bit bus with 450-500MHz DDR II, and that the R9700 will ship with, let's say, 300MHz memory, you can easily compare the bandwidth. Sure, that will not give you the full picture, but making any assumptions about effective bandwidth at this stage is very premature - you don't know how effective the R300's bandwidth management is, and you don't even know what techniques Nvidia will or will not use in the NV30.

Hypothetical case:
Assuming a 300MHz memory clock for the R300, it will have 19.2GB/s bandwidth.
Assuming a 128-bit bus and a 500MHz memory clock for the NV30, it will have 16GB/s bandwidth. This case would give the 9700 a 20% raw advantage. That is not impossible to overcome with effective bandwidth management, but considering how little we know about the real-world effectiveness of the bandwidth-saving techniques on either card, it is not set in stone.
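
As a quick sanity check on those numbers (a minimal sketch; the clocks are the hypothetical figures from this post, not confirmed specs):

```python
# Hypothetical bandwidth comparison using the figures above, not confirmed specs.
# DDR transfers data twice per clock, so effective rate = 2 * memory clock.

def bandwidth_gb_s(bus_bits, mem_clock_mhz, transfers_per_clock=2):
    """Peak theoretical bandwidth in GB/s (1 GB = 10^9 bytes)."""
    bytes_per_transfer = bus_bits / 8
    return mem_clock_mhz * 1e6 * transfers_per_clock * bytes_per_transfer / 1e9

r300 = bandwidth_gb_s(256, 300)   # 256-bit bus, 300MHz DDR    -> 19.2 GB/s
nv30 = bandwidth_gb_s(128, 500)   # 128-bit bus, 500MHz DDR II -> 16.0 GB/s

print(f"R300: {r300:.1f} GB/s, NV30: {nv30:.1f} GB/s")
print(f"R300 raw advantage: {100 * (r300 / nv30 - 1):.0f}%")  # ~20%
```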

Without any knowledge of the NV30's architecture, bus width, or memory speed, or of the bandwidth efficiency of either card, you just can't make these assumptions and expect them to be taken seriously. I could go and hypothesize that the NV30 will be a tile-based deferred renderer with embedded RAM and a 512-bit bus to 600MHz memory, but it would be as pointless as what you are doing.
 
Btw, the DDR2 would have to be around 225-250 (900-1000MHz effective) to do what you're describing. This fits well within current rumors.

Anyway, I'm not making completely baseless speculation here. I'd be rather surprised if I was far off the mark in my last post about the memory bandwidth of the NV30.

After all, consider what we've seen:

1. An nVidia employee stating that a 256-bit bus is "overkill."
2. The emergence of DDR2 memory. We know that memory companies have it in some form, and that ATI is prepared to outfit their R300 with it.

As for my comments on the NV30 making better use of its memory bandwidth, that is definitely not known, but neither is it entirely unlikely. I have stated in the past that the place that the NV30 will probably fall behind, if it falls behind anywhere, will be with high-resolution FSAA.
 
Factor in that the R300's memory controller also supports DDR II RAM, and that faster RAM will be available to card manufacturers like Unitech that like to exceed the reference spec...we could easily see 350-400MHz DDR boards.
ATI is shooting for a 325MHz reference spec on RAM timing.
 
Chalnoth said:
Btw, the DDR2 would have to be around 225-250 (900-1000MHz effective) to do what you're describing. This fits well within current rumors.

You do know that DDRII and DDR have identical bandwidth, right?
 
Chalnoth.. if an nVidia employee stated that a 256-bit bus is overkill, he/they OBVIOUSLY said that because they knew the NV30 was not going to have one.. because they wanted to spend their transistor count in other areas...


http://www.beyond3d.com/articles/gf4launch/index3.php


This does show the GF4 memory controller.. as stated, it is the same as the GF3's.. four 32-bit data paths for increased efficiency...

Now.. are you actually going to say that a switched, interleaved crossbar controller with four 64-bit paths, with even greater access efficiency, is going to be "overkill"??? I can't really get upset about this, as I said much the same thing about the NV30's advanced shaders.. BUT.. seriously.. can you really agree with nVidia and not see it as the PR statement that it is...
 
Geeforcer said:
Chalnoth said:
Btw, the DDR2 would have to be around 225-250 (900-1000MHz effective) to do what you're describing. This fits well within current rumors.

You do know that DDRII and DDR have identical bandwidth, right?

Yeah, I don't know how many times this has to be repeated, but DDR2 is not quad-speed memory; it's still double speed. So he should have written, "Btw, the DDR2 would have to be around 450-500 (900-1000MHz effective)".

But he might be right about the needed clockspeed. A 992MHz 128-bit bus has 80% of the theoretical bandwidth of a 620MHz 256-bit bus (I think the sample 9700s had 620MHz memory, feel free to correct me if I'm wrong), and since 128-bit buses have higher efficiency (less data BW wasted for small reads), that could be enough to give it the same effective bandwidth.

992MHz is still a very high speed. I don't know if I believe that, at this point, a 992MHz 128-bit bus is actually cheaper to implement than a 620MHz 256-bit bus, but if that is what they've done, then they probably had their reasons for it.
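
For what it's worth, the 80% figure checks out; here's the arithmetic (a sketch using the clocks quoted above; the 620MHz number is a recollection, not a confirmed spec):

```python
# Both buses are DDR, so the quoted MHz figures are already effective (transfer) rates.
def peak_gb_s(bus_bits, effective_mhz):
    """Peak theoretical bandwidth in GB/s (1 GB = 10^9 bytes)."""
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

narrow = peak_gb_s(128, 992)   # ~15.9 GB/s
wide   = peak_gb_s(256, 620)   # ~19.8 GB/s
print(f"{narrow:.1f} GB/s vs {wide:.1f} GB/s -> {100 * narrow / wide:.0f}%")  # 80%
```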
 
What're the chances that nVidia will employ some form of dual-channel memory bus, as they have in the nForce chipset? The bus may only be 128 bits wide, but there may be two interleaved channels.

I believe that doubles the bus bandwidth, e.g. 3.2GB/sec to 6.4GB/sec.
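
Back-of-the-envelope, that doubling looks like this (a sketch; the 3.2GB/s figure corresponds roughly to a single 64-bit channel at 400MT/s, as in the example above, and none of this is a confirmed NV30 spec):

```python
# nForce-style dual channel: two independent 64-bit channels accessed in parallel.
# Figures follow the example above (roughly a 64-bit channel at 400 MT/s).
channel_bits = 64
effective_mt_s = 400e6                                   # 200 MHz DDR -> 400 MT/s
per_channel = effective_mt_s * channel_bits / 8 / 1e9    # 3.2 GB/s
print(f"single channel: {per_channel:.1f} GB/s, "
      f"dual channel: {2 * per_channel:.1f} GB/s")       # 3.2 -> 6.4 GB/s
```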
 
Hellbinder[CE] said:
and since 128-bit buses have higher efficiency (less data BW wasted for small reads)

I fail to understand this point...

More granularity. The data would be split into smaller chunks with a 128-bit bus (32 bits per channel, as opposed to 64 bits with a 256-bit bus).
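
A toy illustration of that granularity point (a sketch with made-up request sizes; real controllers fetch in multi-clock bursts, but the per-access waste scales with channel width the same way):

```python
# Toy model: each access fetches a multiple of the channel width, so narrow
# channels waste fewer bytes on small reads. Request sizes below are made up.
import math

def wasted_bytes(request_bytes, channel_bytes):
    fetched = math.ceil(request_bytes / channel_bytes) * channel_bytes
    return fetched - request_bytes

requests = [6, 10, 18, 34]  # hypothetical small reads, in bytes
for width in (8, 4):        # 64-bit vs 32-bit channels (256-bit vs 128-bit bus, 4 controllers)
    waste = sum(wasted_bytes(r, width) for r in requests)
    print(f"{width * 8}-bit channels: {waste} bytes wasted "
          f"on {sum(requests)} bytes requested")
# 64-bit channels waste 20 bytes here; 32-bit channels waste only 8.
```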
 
There's always the hierarchical Z-buffer and Ned Greene's methods, which have yet to be implemented in nVidia's offerings.

If they can keep a full pyramid (ATI uses only 3 levels), the bandwidth savings would be quite high (SA had a chart a few years ago showing the potential gains).

The average case could gain some 20% in bandwidth in overdraw-intensive scenes.
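
For anyone unfamiliar with the technique, here's a minimal sketch of the coarse-Z test at its heart (a single pyramid level over 4x4 tiles; a real implementation keeps a full mip-style pyramid and updates it incrementally, and the tile size here is purely illustrative):

```python
# Minimal hierarchical-Z sketch: keep the max depth of each 4x4 tile of the
# Z-buffer. An incoming block of fragments whose nearest depth in a tile is
# still farther than the tile's stored max can be rejected without reading
# the per-pixel Z values -- that skipped read is the bandwidth saving.
# (Convention: smaller z = closer to the camera.)

TILE = 4

def build_max_pyramid(zbuffer):
    """One pyramid level: max z per TILE x TILE block."""
    h, w = len(zbuffer), len(zbuffer[0])
    return [[max(zbuffer[y + dy][x + dx]
                 for dy in range(TILE) for dx in range(TILE))
             for x in range(0, w, TILE)]
            for y in range(0, h, TILE)]

def tile_occluded(max_pyramid, tile_x, tile_y, incoming_min_z):
    """True if everything already drawn in the tile is closer than the incoming block."""
    return incoming_min_z > max_pyramid[tile_y][tile_x]

# Tiny example: an 8x8 Z-buffer whose left half is covered by near geometry.
zbuf = [[0.2] * 4 + [1.0] * 4 for _ in range(8)]
pyramid = build_max_pyramid(zbuf)
print(tile_occluded(pyramid, 0, 0, incoming_min_z=0.5))  # True  -> tile skipped
print(tile_occluded(pyramid, 1, 0, incoming_min_z=0.5))  # False -> per-pixel test needed
```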

Likewise, if they have implemented a very smart FSAA algorithm (say, something like Z3) as well as a more adaptive AF algorithm, one could see raw bandwidth leads disintegrating. Efficiency can be just as important as raw bandwidth. Look at Parhelia for an example.
 