More NV30 speculation!!

From www.reactorcritical.com today - note the comment on the 256-bit RAM interface.

According to a report I read on the ExtremeTech web-site some time ago, Nvidia’s Derek Perez confirmed that the highly-anticipated graphics processor code-named NV30 will actually launch on the 18th of November at the Comdex Fall show.

The NV30 graphics processor has 8 rendering pipelines with 2 TMUs each, supports features beyond DirectX 9.0, and represents a brand-new VPU architecture from the most successful graphics chip developer so far. The graphics boards based on the newcomer will feature AGP 8x and a 256-bit memory bus. There will be two video adapters offering different performance levels, powered by the NV30 VPU; additionally, Nvidia will also issue two graphics cards for professional use. As for the clock speeds of the up-and-coming products, the company will offer a 325-400MHz range for the VPU and probably something between 700 and 800MHz for the memory.

The prices of the actual boards are yet to be determined, though I personally believe they will not be higher than $399.
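
To put those rumoured numbers in perspective, here's a quick back-of-the-envelope sketch. Every figure in it comes from the speculation above, so treat the output as rumour arithmetic, not a confirmed spec:

```python
# Back-of-the-envelope figures from the rumoured NV30 specs above:
# 8 pipelines x 2 TMUs, 325-400MHz core, 256-bit bus, 700-800MHz
# effective (DDR) memory. All of these are rumours, not confirmed.

pipes, tmus = 8, 2

for core_mhz in (325, 400):
    mpix = pipes * core_mhz          # pixel fillrate in Mpixels/s
    mtex = pipes * tmus * core_mhz   # texel fillrate in Mtexels/s
    print(f"{core_mhz}MHz core: {mpix} Mpix/s, {mtex} Mtex/s")

bus_bits = 256
for mem_mhz in (700, 800):                 # effective DDR data rate
    gb_s = bus_bits / 8 * mem_mhz / 1000   # bytes per transfer x GT/s
    print(f"{mem_mhz}MHz effective on a {bus_bits}-bit bus: {gb_s:.1f} GB/s")
```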
 
My, look at the variance in rumours these days:

The Inquirer's Fuad (http://www.theinquirer.net/?article=5833) said:

IN THE GRAPHICS CARD BUSINESS, a good chip alone is not enough to make a good card. Of course, you need memory on the card that runs as fast as it possibly can.

But even that's still not enough. The third key element is finding the most effective way to use the memory on the card, which leads us into marchitecture territory, since the memory implementation is often hidden behind marketing names such as LMA [Lightspeed Memory Architecture, as used on the GeForce 4, aka NV25 and NV28].

Since many of you still have doubts about the NV30, we are about to disclose the final part of the NV30 puzzle -- its own, unique memory secrets.

As we previously said, the NV30 will have only a 128-bit interface, which is half the width of the one used on the fastest gaming card around, ATi's Radeon 9700 PRO.

What we haven't yet mentioned is what kind of memory it sports and what its actual clock speed is. That’s where the surprise comes in. Assuming that the NV30 would use standard DDR memory was a big mistake. The NV30 will use DDR II memory, the (r)evolutionary memory marchitecture. But there's even more.

Many thought that Nvidia would use 700 MHz to 800 MHz memory, which is not true, at least for the fastest implementation of the chip, as we expect several cards based on the same core.

The fastest NV30 will work at the nice round number of 1000MHz, bringing graphics memory to a 1GHz speed for the first time in history.

One of the key questions is: who has this kind of memory? If you search the net for a while, you will find that Samsung promised volume production of DDR II memory capable of up to 1GHz speeds in Q1 2003.

This means that some volume will exist by December, when this card should appear on the shelves. Nvidia's CEO, Jen-Hsun Huang, never said that there would be loads of cards, but there will be some cards shipping and on the shelves by that time.

DDR II memory at 1000MHz -- almost 400 MHz faster than the memory used on the Radeon 9700 PRO -- is a key element of the NV30's architecture that Nvidia expects will help it blow ATi's offering out of the water.

So, I guess we've told it all now, except for the name of this product. And it's very unlikely to be called GeForce 5.
 
Remember, the original GeForce was called the GeForce 256. It had a 256-bit bus even though it only had a 128-bit (SDR!) memory interface. So the rumours insisting on a 256-bit bus are not incompatible with the rumours of a 128-bit memory interface.

A 128-bit interface to DDR II RAM at 500 MHz (1 Gbit/s per pin) sounds about right to me.
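
A minimal sanity check on that guess, with the Radeon 9700 PRO's 256-bit DDR (310MHz, 620MHz effective) as the yardstick; the NV30 numbers are of course still pure speculation:

```python
# Peak memory bandwidth: bytes per transfer times effective transfer rate.
# The first line is the 128-bit DDR II guess above; the second uses the
# 9700 PRO's known 256-bit bus at a 620MHz effective data rate.

def bandwidth_gb_s(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz / 1000

print(f"128-bit @ 1000MHz effective: {bandwidth_gb_s(128, 1000):.1f} GB/s")  # 16.0
print(f"256-bit @  620MHz effective: {bandwidth_gb_s(256, 620):.1f} GB/s")   # ~19.8
```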
 
Well, one advantage of staying with a 128-bit interface (to DDR II RAM) is that you can easily scale a range of products from mainstream through performance to high-end just by using different DDR II clock speeds. Hmmm, nVidia's CEO hinted that they will try to base a lot of their lineup next year on the NV30 design.

Anyway, for me the interesting open question is still what they have up their sleeve in regard to the remark about a 'highly efficient' rendering architecture. It has to be something more than just a tweak to LMA II.

I hope we'll know more after October 23rd. ;)
 
Well, one advantage of staying with a 128-bit interface (to DDR II RAM) is that you can easily scale a range of products from mainstream through performance to high-end just by using different DDR II clock speeds.

I don't understand. You can do that and more with an architecture that can use not only 128-bit DDR/DDR II, but 256-bit DDR/DDR II as well. (See the Radeon 9500 through Radeon 9700 Pro, and whatever 9700 DDR II variant they are working on.)
 
antlers4 said:
Remember, the original GeForce was called the GeForce 256. It had a 256-bit bus even though it only had a 128-bit (SDR!) memory interface. So the rumours insisting on a 256-bit bus are not incompatible with the rumours of a 128-bit memory interface.

No, people here are talking about either a 256- or 128-bit memory bus, not the 4x64-bit rendering pipelines that generated the 256 moniker.
 
Yes, people here are talking about a physical 256-bit wide bus... but with web sites that get "inside" info....

...They are famous for "misinterpreting" things. For all we know, there are documents circulating that say "256-bit architecture", and web sites "translate" that to mean "256-bit bus."

That's usually how these things happen...

And for the record, I believe nVidia's official stance on the 256 moniker of the GeForce actually had nothing to do with 4x64-bit pipelines. It was some pure marketing term whose technical relevance was so irrelevant that I forget what it was at the moment. ;)
 
I think the technical relevance was the 128-bit memory interface x 2 (for DDR) = 256. That's how I recall the marketing at the time.
 
Joe DeFuria said:
Well, one advantage of staying with a 128-bit interface (to DDR II RAM) is that you can easily scale a range of products from mainstream through performance to high-end just by using different DDR II clock speeds.

I don't understand. You can do that and more with an architecture that can use not only 128-bit DDR/DDR II, but 256-bit DDR/DDR II as well. (See the Radeon 9500 through Radeon 9700 Pro, and whatever 9700 DDR II variant they are working on.)

Well, my point was that you have to increase the pin count of the chip with a move to the 256-bit memory bus (the Radeon 9700 chip should have over 1,000 pins). That probably means you're forced to use a flip-chip package even if you just go 128-bit on the board, but I really don't know if that is expensive, so maybe my point is moot.
 
Well, my point was that you have to increase the pin count of the chip with a move to the 256-bit memory bus...

Understood, but doubling the bus gives a much larger bandwidth increase than simply increasing memory frequency. (And using more expensive memory has its own increased costs.)

So in other words, I'm just saying that offering both 128- and 256-bit buses, in addition to varying memory speeds, offers a wider variety of products across the performance and price scale than memory speed binning alone.
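
A tiny sketch of that point, with made-up clock speeds purely to show the spread of bandwidth tiers you get from the two knobs together:

```python
# Combining bus width with memory clock binning gives a wider spread of
# bandwidth tiers than clock binning alone. The clock speeds here are
# hypothetical, just to illustrate the point.

def bandwidth_gb_s(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz / 1000

for bus in (128, 256):
    for mhz in (500, 700, 1000):   # hypothetical effective DDR rates
        print(f"{bus}-bit @ {mhz}MHz: {bandwidth_gb_s(bus, mhz):.1f} GB/s")
```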
 
antlers4 said:
I think the technical relevance was the 128-bit memory interface x 2 (for DDR) = 256. That's how I recall the marketing at the time.

Nope, that's not right either, because the original GeForce 256 was SDR. How age dims one's memory.
 
Joe DeFuria said:
And for the record, I believe nVidia's official stance on the 256 moniker of the GeForce actually had nothing to do with 4x64-bit pipelines. It was some pure marketing term whose technical relevance was so irrelevant that I forget what it was at the moment. ;)

Indeed, and here is Tom's description of why 256, nicely summed up I believe:

"But let's get back to the magic '256'. I could hardly believe my ears when I was finally told what the '256' stands for. NVIDIA adds the 32-bit deep color, the 24-bit deep Z-buffer and the 8-bit stencil buffer of each rendering pipeline and multiplies it with 4, for each pipeline, which indeed ads up to 256. So far about the fantasy of marketing people, they are a very special breed indeed"
 
Randell said:
"But let's get back to the magic '256'. I could hardly believe my ears when I was finally told what the '256' stands for. NVIDIA adds the 32-bit deep color, the 24-bit deep Z-buffer and the 8-bit stencil buffer of each rendering pipeline and multiplies it with 4, for each pipeline, which indeed ads up to 256. So far about the fantasy of marketing people, they are a very special breed indeed"

Indeed. Let's see if we can cook something up for them: how does 128-bit precision on eight rendering pipelines sound? It sounds like a 1024-bit VPU!...

... and the retailer goes: "Hey, it's real awesome, dude!" to the poor customer. ;)
 
Wait. There are 120 million transistors, so let's say 16 million gates, right? 16 million bits of information are being processed at the same time, correct? Well, some of those bits are idle in cache, so (to be conservative) the NV30 is an 8-million-bit VPU!!!

Anyone wishing to hire me for a six-figure marketing job, please drop me an e-mail.
 
How about both, if the 256-bit and 128-bit bus rumours are true? The high-end cards *may* have 256-bit buses and the mid- to low-end cards 128-bit buses?
 