NV30 = no 256 bits wide data bus?

Psurge:
Can the nv30 afford very large caches when it already must use FP throughout the pipeline?
I should have said larger caches with higher associativity than what they have now. Anyway, it will be using a .13 micron process, right? Then they have some silicon space to spare.

Actually, those are for the "professional" versions of 3DLabs/Creative Labs' P10 boards, which are due out in the next month or two. We have no idea what Creative will charge for the "consumer" version of the P10, which is due out "by X-Mas". Certainly, I doubt it will be as high as $600 for a 128 MB board.
I was thinking that consumer cards would be available only next year. This is good news.
 
Tagrineth said:
We should also have PowerVR Series4 out by the end of the year (for Teasy's sake, by August... ;) ).

Is this still about his promise to cut off his testicles if not.....? I guess one shouldn't be "that" sure about things like that.
 
kl899 said:
Don't forget that none of us on this board are average consumers. We have highly opinionated views of 3D performance. I think the answer is that the average consumer DOES care about raw speed in existing games. That's what they can brag to their friends about, etc. Let me tell you something... my roommate went from a GeForce 2 MX to a GeForce 3 TI 500. Guess why he made that switch? To make his Counter Strike framerate faster. Both of us know that the GeForce 2 MX is plenty enough to run Counter Strike, but people always want to have the fastest hardware, not necessarily for more advanced quality features.

Going from a GF2 MX to a GF3 TI500 makes a huge difference in performance. I don't think that the difference between the NV30 and other next gen cards will be that big.

Also, a TI4600 runs Quake 3 at:
374 fps at 640*480
257 fps at 1024*768
(from Tom's Hardware's review of the latest P4)

And an NV30 will hardly be slower, so I kind of doubt that Quake 3 will cause problems for the NV30 (compared to other cards) as far as benchmarking goes.
Even if it is a bit slower.

I certainly hope that non-FSAA, non-anisotropic benchmarks will be a thing of the past. I know that 640*480 is still used by many reviewers, but it's hardly used as a measurement of the card anymore.
1024*768 without any FSAA or aniso is close to obsolete too, IMO.

As far as FSAA goes... this is where things are going to get hairy. The tendency, of course, is for the cookie-cutter sites to just run benchmarks with "2X FSAA" enabled and compare FPS... never mind that each vendor probably has their own way of doing FSAA, and never mind trying to compare actual image quality between the two. So while I agree that FSAA (and advanced filtering) tests SHOULD be more important this fall, I have low expectations of review sites actually doing it "right". And the "winner" will be the board that simply gets the highest FPS at the equivalent "setting". (Again, setting meaning "2X, 4X" etc., not necessarily equivalent image quality, as it should be.)

Well, I'm afraid of that too. But luckily, we have sites like Beyond3D :)
Unfortunately, the masses read sites like Anandtech and Tom's, so they might not get the right picture..
 
I fail to see how an emphasis on "better quality pixels" doesn't have as dramatic of an impact on bandwidth compared to "more pixels"..

It seems to actually be the reverse case: additional samples or oversampling data are very bandwidth hungry, especially when it comes to textures/texturing.

The only thing I can guess is that NVIDIA is indeed planning on eating their words on tilers, or has some form of loopback/reject processing that can add even more efficiency to what they have in the GF4 today.

So, it looks like WarpSpeed Memory Architecture I (WSMA-I) will be the new (tm) buzzword for the next card? :)
 
Well, it seems this should settle the debate on what the NV30 uses. The NV30 will be using DDR-2 @ 900 MHz. If the effective frequency is that high, then we should expect around 28.8 GB/s of bandwidth, I think.
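For what it's worth, here is the back-of-the-envelope arithmetic behind that figure (just a sketch; the 256-bit bus width is an assumption, which is exactly what this thread is debating):

```python
# Back-of-the-envelope bandwidth check:
# bandwidth (GB/s) = effective data rate (MT/s) * bus width (bits) / 8 / 1000

def bandwidth_gb_s(effective_mt_s: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s for a given effective data rate and bus width."""
    return effective_mt_s * bus_width_bits / 8 / 1000

print(bandwidth_gb_s(900, 128))  # 14.4 GB/s on a conventional 128-bit bus
print(bandwidth_gb_s(900, 256))  # 28.8 GB/s -- only if NV30 also gets a 256-bit bus
```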

Since some sites are putting out info regarding the NV30, I might as well spill what I know. You might think this is just rumors, but this is straight from Nvidia, who held a seminar for some top-end developers. So if these end up being wrong, blame Nvidia, as they are the ones who held the seminar. Here is the info:

I caught a couple of NV30 specs for you. First, the RAM will be running at 900 MHz. Secondly, they are claiming at this point 200 million polys per second.

Will Nvidia be using the DDR II memory that is on Samsung's product description pages? We will have to wait for that info, but it's apparent that Nvidia is banking on mass-produced 900 MHz modules from Samsung. The 200 million polys per second is almost double that of the GeForce 4. I believe the GeForce 4 is what? 124? Or something around that.
 
Nah...you're making the same mistake I did a little while ago. ;)

There is no such thing as a 900 MHz DDR-II module any time soon. He would be referencing 450 MHz DDR-II, for an "effective 900 MHz."

That would be 14.4 GB/sec. (Still not too shabby, of course!) Having said that, I don't recall the projected production schedule of DDR-II, so I'm not sure if that's feasible for a September launch of NV30.
 
That kind of RAM doesn't exist at all, at least on Samsung's site.
The first DDR-II modules are sampling in November and are clocked at 133 MHz according to their site, so they are slower than current 'fast' DDR chips.
The 200 Mpoly/s figure is probably a Mvert/s figure assuming a 1:1 vertex-to-poly ratio. That would be the max theoretical throughput for a two-vertex-shader architecture (like on the NV25) clocked at 400 MHz, which is fairly possible with a 0.13 micron process.
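A rough sanity check on that guess (just a sketch; the clocks-per-vertex number is my own assumption, loosely calibrated against NV25's claimed figures, not anything Nvidia stated):

```python
# Hypothetical peak vertex throughput for a dual vertex shader design.
# Assumption: each vertex unit retires a simple transformed/lit vertex
# roughly every 4 clocks (in the same ballpark as NV25's claimed numbers).

def mverts_per_sec(units: int, core_mhz: float, clocks_per_vertex: float) -> float:
    return units * core_mhz / clocks_per_vertex

print(mverts_per_sec(2, 400, 4))  # 200 Mverts/s -> ~200 Mpolys/s at a 1:1 vertex:poly ratio
print(mverts_per_sec(2, 300, 4))  # 150 Mverts/s, roughly NV25 territory
```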

I don't know what to read into Mr. Kirk's and Mr. Jen-Hsun Huang's words.
Maybe for the first time nvidia is making an attempt to substantially change their architecture. Are they slowly migrating towards some kind of deferred renderer? And does it make any sense?!
Anyway, it's nice to see that Richard Huddy, in his GDC presentation about the stuff that will be in NV30, proposed a deferred rendering approach driven by the developer/software :)

ciao,
Marco
 
I think I got the "effective MHz" wrong with DDR-II... Would someone like to explain to me, in layman's terms, the advantage of DDR-II over DDR?

From what I understand, the "effective" MHz (meaning the comparative theoretical MHz of a standard SDRAM) of a DDR-II module is 4X the actual MHz clock. It doubles throughput per clock compared to DDR... not by transferring more frequently per clock (so it's not technically "quad pumped"), but by transferring larger amounts of data per transfer.

So, the first 133 MHz DDR-II modules have the same theoretical bandwidth as a 266 MHz DDR module, correct?

It also seems like with every article I read on DDR-II speeds... I can't tell if they are talking about "effective SDR" MHz, "effective DDR" MHz, or the actual clock rate. I've heard of DDR-II "debuting" at 400 MHz... does that mean a 100 MHz actual clock? Sigh...
 
From what I understand, DDR II's data bus runs twice as fast as its address bus, just like DDR. However, its internal memory array is slower, at 1/4 of the data rate. Its minimum burst length is twice as long as DDR's.

Therefore, a "400 MHz DDR II" is just like a "400 MHz DDR", although DDR II may be a bit slower.
 
So then what's the point? ;) My understanding is that it will also require a new kind of module. That's another costly change. Does DDR-II cut manufacturing costs for the memory itself, or allow ramping to higher clock speeds?
 
Joe, I heard that at least some DDR-II devices could implement small SRAM caches, and if you design the memory controller well you can increase efficiency a lot, which is what DDR really needs.
 
pcchen said:
From what I understand, DDR II's data bus runs twice as fast as its address bus, just like DDR. However, its internal memory array is slower, at 1/4 of the data rate. Its minimum burst length is twice as long as DDR's.

Therefore, a "400 MHz DDR II" is just like a "400 MHz DDR", although DDR II may be a bit slower.
pcchen, you are right.

From the DDR-II paper:
- There is only one burst length (4), reducing test costs.
- No half cycles, reducing test costs.
- No interrupt commands, improving yields and reducing test costs.
- The DRAM core has a prefetch of 4, improving yields at high frequency.
- Device and pinout optimized for low cost.
- No custom DRAM controller required.

I would like to know if there will be any NV30 Ti 200 for the sub-$200 market. Probably it will be limited to 300 or 350 MHz DDR, with luck maybe 400 MHz DDR.

IMHO bandwidth is still a problem.
 
pcchen said:
Therefore, a "400Mhz DDR II" is just like a "400Mhz DDR." Although DDR II may be a bit slower.
So what you're saying is that they are using DDR as the baseline representation of speeds, and not SDR? :-?

I think I might be confusing the specs, but here is an example of how I understand it with DDR-II & DDR: e.g. 450 MHz DDR-II = 900 MHz normal DDR = 1800 MHz SDR.

Or is it just QDR that I'm getting mixed up with, because the way DDR-2 is described also seems somewhat like a QDR spec.
 
Who knows... Maybe Kirk was intentionally stating this to "throw them for a loop", i.e. Matrox/ATI.

And this from Anand's P10 article:

Because of the use of BGA memory it becomes easier to route traces making 256-bit DDR memory buses a reality for more than just 3DLabs, they're simply the first to introduce it. While 3DLabs hasn't released any card specs yet, they are claiming over 20GB/s of memory bandwidth is possible with the P10 meaning that they'd need at least 312.5MHz DDR SDRAM. Considering that the current GeForce4 Ti 4600 uses 325MHz DDR SDRAM, it's very possible that you'll see cards with over 20GB/s of memory bandwidth.
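The arithmetic in that quote does check out, assuming a 256-bit bus (a quick check, nothing more):

```python
# DDR clock needed to hit 20 GB/s on a 256-bit (32-byte) bus.
target_bytes_per_s = 20e9
bus_bytes = 256 // 8                                # 32 bytes per transfer
transfers_per_s = target_bytes_per_s / bus_bytes    # 625e6 transfers/s
print(transfers_per_s / 2 / 1e6)                    # /2 for DDR -> 312.5 MHz actual clock
```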

Obviously, this doesn't say anything about NV30, one way or the other...Perhaps he was strictly talking about Parhelia and/or R300...

On the other hand, the fact that nVidia's CEO stated that NV30 would be a "fundamentally new architecture" and "It is the most important contribution we've made to the graphics industry since the founding of this company" makes one think that we shouldn't think of it with respect to previous products...

If one were to assume that nVidia was not going to go down the path of a 256-bit interface... the likelihood of incorporating 3dfx technology... and add in the quotes from their CEO... Is it not possible that nVidia might go some sort of multichip route? From the standpoint of the core architecture, this would certainly be fundamentally new. We all know the stance that nVidia has had with regards to 3dfx's multichip solutions of the past... But it would allow them to have more direct control over the scalability of their products... and might, in theory, be more cost-effective than what Matrox/3DLabs is doing...

Somehow, I have my doubts that NV30 is simply going to use only higher freq. memory and add some additional logic to their memory controller.
 
The purpose of DDR-II is NOT to improve bandwidth or anything over DDR-I... the fixes are for signal integrity problems which are cropping up with DDR-I. Baaaaad signal integrity issues are coming up fast.
 
Tagrineth said:
The purpose of DDR-II is NOT to improve bandwidth or anything over DDR-I... the fixes are for signal integrity problems which are cropping up with DDR-I. Baaaaad signal integrity issues are coming up fast.
I understand now, which explains the "2" in the DDR-2 spec: it basically allows for much better signal quality, and this can also allow a much higher frequency to be attained.
 